duboff closed this issue 1 year ago.
Pure statistical approaches to language detection are never 100% correct. Look at the confidence values: Russian is only slightly behind Macedonian. Based on the training data I've used, some of the letter sequences in your text are slightly more likely to occur in Macedonian than in Russian.
Language.MACEDONIAN: 0.2627280495188072
Language.RUSSIAN: 0.25885698169328053
Language.SERBIAN: 0.2296931907029266
Language.BULGARIAN: 0.14850396414264333
Language.BELARUSIAN: 0.04966736018194442
Language.UKRAINIAN: 0.023019779852873307
Language.MONGOLIAN: 0.015713463654129702
Language.KAZAKH: 0.011817210253394902
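One way to act on confidence values this close together is to treat the result as ambiguous unless the top language leads the runner-up by some margin. Here is a minimal sketch in plain Python (the thresholding logic and the `min_margin` value are my own illustration, not part of lingua's API), using the values above:

```python
# Confidence values reported by the detector for "как дела" (from this thread).
confidences = {
    "MACEDONIAN": 0.2627280495188072,
    "RUSSIAN": 0.25885698169328053,
    "SERBIAN": 0.2296931907029266,
    "BULGARIAN": 0.14850396414264333,
}

def best_with_margin(conf, min_margin=0.05):
    """Return the top language, or None if the runner-up is too close to call."""
    ranked = sorted(conf.items(), key=lambda kv: kv[1], reverse=True)
    (top_lang, top_p), (_, second_p) = ranked[0], ranked[1]
    return top_lang if top_p - second_p >= min_margin else None

print(best_with_margin(confidences))  # → None (Macedonian leads Russian by only ~0.004)
```

With a margin like this, the "как дела" case would be reported as undecidable rather than confidently Macedonian, which may be preferable for very short inputs.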
Feed longer strings into the detector; then you will get more reliable results. An interesting approach to this problem has been proposed in #101. I will investigate whether changing the probabilities in the way mentioned there produces more accurate results.
Thanks @pemistahl. The issue for me was that this specific string does not appear to be grammatically correct Macedonian. I don't quite understand the algorithm well enough to see why this is happening, though.
Based on the training data I've used, the letter sequences in the text "как дела" are slightly more likely to occur in Macedonian than in Russian. The library does not know anything about semantics, i.e. the meaning of the words. It's all about statistics, i.e. probabilities for certain letter sequences, also called n-grams.
I've briefly explained the algorithm in section 5 of the readme.
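To make the statistical idea concrete, here is a heavily simplified sketch in plain Python (the function names and the smoothing floor are my own, not lingua's internals): each language model is just a table of character n-gram frequencies, and a text is scored by summing the log-probabilities of its n-grams under each model.

```python
import math
from collections import Counter

def ngrams(text, n=3):
    """Slide a window over the lowercased text to get its character n-grams."""
    t = text.lower()
    return [t[i:i + n] for i in range(len(t) - n + 1)]

def train(corpus, n=3):
    """Relative n-gram frequencies of a (toy) training corpus."""
    counts = Counter(ngrams(corpus, n))
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def log_likelihood(text, model, n=3, floor=1e-6):
    """Sum of log-probabilities of the text's n-grams under the model.

    Unseen n-grams get a small floor probability instead of zero, a crude
    stand-in for the smoothing a real detector would use.
    """
    return sum(math.log(model.get(g, floor)) for g in ngrams(text, n))
```

This also shows why longer input helps: every additional n-gram contributes another term to the sum, so a language whose model genuinely fits the text pulls further ahead of near-misses like closely related languages.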
Hi team, great library! I wanted to share an example I stumbled upon when detecting the language of a very short, basic Russian text. It comes out as Macedonian, even though as far as I can tell it is not actually valid Macedonian but is correct Russian. AWS Comprehend and other APIs identify it correctly: