Is your feature request related to a problem? Please describe
This idea originates from https://github.com/opensearch-project/OpenSearch/issues/13089.
In log analytics scenarios it is common to have a large number of mapped fields in an index, and dynamic mapping is commonly used so that newly detected fields are added to the mapping automatically. To avoid mapping explosion, there are index-level mapping limits such as `index.mapping.total_fields.limit`, which defaults to 1000, and `index.mapping.depth.limit`, which defaults to 20.

However, once a mapping limit is breached, every subsequent indexing request that contains newly detected fields fails with the error
`Limit of total fields [1000] has been exceeded`.
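For reference, raising the limit is done through the update index settings API, e.g. (the index name and new value here are illustrative):

```json
PUT /my-index/_settings
{
  "index.mapping.total_fields.limit": 2000
}
```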
A temporary workaround is to increase the limit manually, but once the new limit is breached, users have to increase it again, which is not practical.

Describe the solution you'd like
We can give users an option to control the behavior when the mapping exceeds the limit. One option is to stop indexing the newly detected fields, similar to setting `dynamic: false` in the index mapping: new fields are not added to the mapping and are neither indexed nor searchable, but they are still stored in `_source`, so the indexing request does not fail and users can still see the unmapped fields in the document.

Because the mapping limits are index-level, I think we can also add index-level settings to control the behavior when a limit is breached, something like this:
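A sketch of what such a setting could look like; the setting name `ignore_dynamic_beyond_limit` below is hypothetical, used only to illustrate the shape (the index name is also illustrative):

```json
PUT /my-index/_settings
{
  "index.mapping.total_fields.limit": 1000,
  "index.mapping.total_fields.ignore_dynamic_beyond_limit": true
}
```

With a setting like this enabled, once the limit is reached, newly detected fields would be kept in `_source` only instead of failing the request.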
Related component
Indexing
Describe alternatives you've considered
No response
Additional context
No response