pemistahl / lingua-py

The most accurate natural language detection library for Python, suitable for short text and mixed-language text
Apache License 2.0

Weird issues with short texts in Russian #100

Closed: duboff closed this issue 1 year ago

duboff commented 1 year ago

Hi team, great library! I wanted to share an example I stumbled upon when detecting the language of a very short, basic Russian text. It comes out as Macedonian, even though, as far as I can tell, it is not actually correct Macedonian but is correct Russian. It is identified correctly by AWS Comprehend and other APIs:


from lingua import LanguageDetectorBuilder

detector = LanguageDetectorBuilder.from_all_languages().build()
detector.detect_language_of("как дела")
# Language.MACEDONIAN
pemistahl commented 1 year ago

Pure statistical approaches to language detection are never 100% correct. Look at the confidence values: Russian is only slightly behind Macedonian. Based on the training data I've used, some of the letter sequences are slightly more likely to occur in Macedonian than in Russian.

Language.MACEDONIAN: 0.2627280495188072
Language.RUSSIAN: 0.25885698169328053
Language.SERBIAN: 0.2296931907029266
Language.BULGARIAN: 0.14850396414264333
Language.BELARUSIAN: 0.04966736018194442
Language.UKRAINIAN: 0.023019779852873307
Language.MONGOLIAN: 0.015713463654129702
Language.KAZAKH: 0.011817210253394902
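
For reference, per-language scores like the ones above can be retrieved with the detector's compute_language_confidence_values method. A minimal sketch, assuming a lingua-py release where the method returns objects with language and value attributes (older releases return plain (language, value) tuples):

from lingua import LanguageDetectorBuilder

detector = LanguageDetectorBuilder.from_all_languages().build()

# Rank every candidate language by its confidence for the short input
# instead of looking only at the single most likely one.
for confidence in detector.compute_language_confidence_values("как дела"):
    print(confidence.language, confidence.value)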

Feed longer strings into the detector and you will get more reliable results. An interesting approach to solving this problem has been proposed in #101. I will investigate whether changing the probabilities in the mentioned way produces more accurate results.
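
A minimal sketch of that suggestion; the longer Russian sentence is my own example, and the expectation that it resolves to Russian is an assumption rather than a verified output:

from lingua import LanguageDetectorBuilder

detector = LanguageDetectorBuilder.from_all_languages().build()

# A longer sentence gives the n-gram statistics more evidence to work with;
# the expectation (not a guaranteed result) is that Russian wins clearly here.
print(detector.detect_language_of("Привет, как дела? Чем занимаешься сегодня вечером?"))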

duboff commented 1 year ago

Thanks @pemistahl. The issue for me was that this specific string does not appear to be grammatically correct Macedonian. I don't understand the algorithm well enough to see why this is happening, though.

pemistahl commented 1 year ago

Based on the training data I've used, the letter sequences in the text "как дела" are slightly more likely to occur in Macedonian than in Russian. The library does not know anything about semantics, i.e. the meaning of the words. It's all about statistics, i.e. probabilities for certain letter sequences, also called n-grams.

I've briefly explained the algorithm in section 5 of the readme.
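
To make the n-gram idea concrete, here is a small illustration of how a short phrase decomposes into letter sequences. This is a generic sketch of n-gram extraction, not the library's internal implementation:

def ngrams(text: str, n: int) -> list[str]:
    # Slide a window of length n over the text to collect letter sequences.
    return [text[i:i + n] for i in range(len(text) - n + 1)]

phrase = "как дела"
print(ngrams(phrase, 2))  # bigrams:  ['ка', 'ак', 'к ', ' д', 'де', 'ел', 'ла']
print(ngrams(phrase, 3))  # trigrams: ['как', 'ак ', 'к д', ' де', 'дел', 'ела']

A statistical detector compares the relative frequencies of such sequences in its training corpora, which is why two closely related Cyrillic languages can end up with very similar scores for an input of only eight characters.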