The default text preparation pipeline does not perform any Unicode character normalization, and several of the default normalizer regexes do not include the combining diacritics (U+0300–U+036F) in their alphabetic/word-like ranges. As a result, tokens get split at any character that carries a combining diacritic.
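A minimal standalone sketch of the failure mode (the ASCII-only pattern below is illustrative, not the pipeline's actual regex):

```python
import re
import unicodedata

# "naïve" in decomposed (NFD) form: "i" followed by U+0308 COMBINING DIAERESIS
text = unicodedata.normalize("NFD", "naïve")

# A word-character pattern that excludes combining marks (as the default
# normalizer regexes do) breaks the token at the diacritic:
print(re.findall(r"[a-zA-Z]+", text))  # ['nai', 've']

# The composed (NFC) form survives intact under a Unicode-aware pattern:
print(re.findall(r"\w+", unicodedata.normalize("NFC", text)))  # ['naïve']
```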
In addition, the ASCII-folding filter ignores (i.e. copies through) the combining diacritics, so if the developer specifies the diacritics as word-like chars in the config
"keep_special_chars": r"\@\[\]'\u0300-\u036f"
then those bare combining diacritics are retained in the tokens, while the precomposed characters are folded to their diacritic-free base characters, so canonically equivalent inputs produce different tokens.
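To see the inconsistency concretely, here is a deliberately naive stand-in for a folding filter (a hypothetical illustration, not the pipeline's implementation), which folds precomposed characters but copies unknown code points through:

```python
import unicodedata

def naive_ascii_fold(text: str) -> str:
    # Fold each precomposed character to its base letter if that base is
    # ASCII; copy every other character (including bare combining marks).
    out = []
    for ch in text:
        base = unicodedata.normalize("NFD", ch)[0]
        out.append(base if base.isascii() else ch)
    return "".join(out)

print(ascii(naive_ascii_fold("café")))         # 'cafe' -- composed é is folded
print(ascii(naive_ascii_fold("cafe\u0301")))   # 'cafe\u0301' -- the bare mark survives
```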
So the only solution available to the developer is to write a custom preprocessor that handles the character normalization, which is a fair amount of overhead for something that should be easy, if not the default behavior.
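For reference, the normalization itself is a one-liner in the standard library; the overhead is entirely in having to wire a custom preprocessor into the pipeline at all. A hypothetical preprocessor (the function name is assumed, not part of the pipeline's API):

```python
import unicodedata

def preprocess(text: str) -> str:
    """Compose combining diacritics into single code points (NFC) so the
    downstream tokenizer and ASCII-folding filter see 'é' rather than
    'e' + U+0301."""
    return unicodedata.normalize("NFC", text)

print(preprocess("cafe\u0301") == "café")  # True
```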