Open fschlaeppi opened 8 years ago
Hi, I see so many requests for auto language detection and using the right analyzer for indexing. Is this issue currently being tracked? Any response is really appreciated. Regards
With the removal of _analyzer being specified in the query (in https://github.com/elastic/elasticsearch/issues/9279), auto selection of the analyzer for a field doesn't really make sense as far as I can tell. Each field has only a single analyzer associated with it, so you can't really analyze on the fly based on lang detect.
So either you put your content into a field that is agnostic about the analyzer and use the language detection to filter on, or you make one call to determine the language of your content and then index your data into the appropriate field for the appropriate analyzer.
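The second approach (detect first, then route) can be sketched client-side. This is only an illustration: `detect_language` is a placeholder for a real detector (a library or a detection API call), and the per-language field names are hypothetical.

```python
# Maps detected ISO language codes to hypothetical per-language fields,
# each of which would be configured with a matching analyzer.
FIELD_FOR_LANG = {
    "de": "german_field",
    "en": "english_field",
    "fr": "french_field",
}

def detect_language(text):
    # Placeholder detector: a real implementation would call a
    # language-detection library or service here.
    return "en"

def build_document(text, default_field="general_field"):
    """Route the text into the field mapped for its detected language."""
    lang = detect_language(text)
    field = FIELD_FOR_LANG.get(lang, default_field)
    return {field: text}

doc = build_document("This is a small example of english text")
```

The resulting document is then indexed as usual; only the routing step is extra.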
So for instance we have separate fields like:
I can implement the following.
The scenario is like this: first, configure a mapping with the languages you want to detect in a `languages` parameter. Then, configure the fields to which the text of successfully detected languages should be mapped, in a `language_to` parameter.
```json
{
  "someType" : {
    "properties" : {
      "someField" : {
        "type" : "langdetect",
        "languages" : [ "de", "en", "fr", "nl", "it" ],
        "language_to" : {
          "de" : "german_field",
          "en" : "english_field"
        }
      },
      "german_field" : {
        "analyzer" : "german",
        "type" : "string"
      },
      "english_field" : {
        "analyzer" : "english",
        "type" : "string"
      }
    }
  }
}
```
In this example, submitting the text "This is a small example of english text" to `someField` will index `en` into the `someField` field (which has the `langdetect` type), and the text will also be passed to the `english_field` field. German text would be indexed into the `german_field` field, using a different analyzer.
It is up to the user to configure the field analyzers and the `language_to` mapping appropriately. There are cases where a detected language has no Lucene language analyzer, so it is not possible to implement a fully automatic scenario that covers every language that can be detected and every Lucene language analyzer.
Another issue is indexing multilanguage text into a single field. Here I recommend the ICU analyzer. ICU can apply normalization, folding, and tokenization based on Unicode scripts, which is the best method for searching multilanguage text in a single field. Stemming is not applied.
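For reference, a language-neutral field along those lines might be set up with a custom analyzer built from the ICU components (this assumes the analysis-icu plugin is installed; the analyzer name `multilang` is just an example):

```json
{
  "settings" : {
    "analysis" : {
      "analyzer" : {
        "multilang" : {
          "type" : "custom",
          "tokenizer" : "icu_tokenizer",
          "filter" : [ "icu_folding" ]
        }
      }
    }
  }
}
```

A field mapped with `"analyzer" : "multilang"` would then tokenize and fold text per Unicode script, without language-specific stemming.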
Released version 2.4.4.1 with the `language_to` feature.
@jprante for language detection, can you provide a default/fallback `language_to` for when the detection score is below some confidence threshold? It would probably point at a field with a language-neutral analyzer such as ICU, which is safer across a lot of languages but not perfect.
Hi jprante,
Small question that might be useful for some people, I guess.
Is there a way, at index time, to apply the right analyser based on the result of the language detection? If yes, could you provide us with a code example?
Thanks in advance, F