Description
We have a maximum allowed secondary index entry size (~64KB). If an entry exceeding that size exists during index creation, the index creation fails with a proper error message. However, if the index already exists and a document encountered during ingestion causes the index to have an entry larger than the allowed size, then ingestion repeatedly fails in the background due to a storage failure. We need to revisit the system behavior when this case is encountered.
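A minimal sketch of the kind of up-front validation described above (hypothetical names, not the actual Couchbase Analytics code): checking the serialized entry size against the ~64KB limit before it reaches storage, so an oversized entry can surface a clear, actionable error instead of a repeated background storage failure.

```java
// Hypothetical sketch; class/constant names are illustrative, not from the codebase.
public class IndexEntrySizeCheck {
    // Assumed limit; the description states it is approximately 64KB.
    static final int MAX_ENTRY_SIZE = 64 * 1024;

    // Validate the serialized index entry before handing it to storage,
    // so the failure is reported once with context rather than retried forever.
    static void validateEntry(byte[] serializedEntry) {
        if (serializedEntry.length > MAX_ENTRY_SIZE) {
            throw new IllegalArgumentException(
                "Secondary index entry of " + serializedEntry.length
                + " bytes exceeds the maximum allowed size of "
                + MAX_ENTRY_SIZE + " bytes");
        }
    }

    public static void main(String[] args) {
        validateEntry(new byte[1024]); // small entry: passes silently
        boolean rejected = false;
        try {
            validateEntry(new byte[MAX_ENTRY_SIZE + 1]); // oversized: rejected
        } catch (IllegalArgumentException e) {
            rejected = true;
        }
        System.out.println(rejected ? "oversized entry rejected" : "oversized entry missed");
    }
}
```

Detecting the oversized entry at ingestion time (as index creation already does) would let the system skip or report the offending document instead of failing repeatedly in the background.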
Attachments
Issue Links
- relates to MB-38743: Failure when creating secondary index on fields in a dataset that have large sized values (Closed)