Closed. mhillendahl closed this issue 11 months ago.
The data used to build the dictionaries is pulled from the OpenSubtitles project, and the build process is automated in script/build_dictionaries.py. I don't know of a good way to automatically find and flag each of these "cross-overs", but any help in making the build_dictionaries.py script more robust would be appreciated.
There are almost 1.5 million entries in the English dictionary, which is clearly far too many. And the extra entries are not limited to French, Spanish, and German. For example:
"開かれた": 1, "闇を切り裂いてさ": 2, "阎东生": 1, "阿昭": 1, "降り出した雪": 10, "限りがあるってのを知っていてムダにしちゃうんだろう": 2,
Python 3.9.5, Windows 10 x64
Expected Behavior
Each language dictionary contains only words from that language. Words written in a different language, whether deliberately or by mistake, are reported as unknown.
Observed Behavior
Each language dictionary appears to contain its own language plus one or more additional languages:
- en contains words from English as expected, but also from Spanish, French, and German.
- es contains words from Spanish as expected, but also from English and French.
- fr contains words from French as expected, but also from English.
- pt contains words from Portuguese as expected, but also from English.
- de contains words from German as expected, but also from English.
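One way to quantify this observation is to compare the word lists of the shipped dictionaries pair by pair. The sketch below assumes the pyspellchecker package and that its WordFrequency object exposes a words() iterator; it only counts shared entries, without deciding which language they belong to.

```python
from itertools import combinations
from spellchecker import SpellChecker

LANGUAGES = ["en", "es", "fr", "pt", "de"]

# Load each shipped dictionary once and keep only its word list.
words = {
    lang: set(SpellChecker(language=lang).word_frequency.words())
    for lang in LANGUAGES
}

# Count how many entries every pair of dictionaries has in common.
for a, b in combinations(LANGUAGES, 2):
    shared = len(words[a] & words[b])
    print(f"{a}/{b}: {shared} shared entries "
          f"({shared / len(words[a]):.1%} of {a}, {shared / len(words[b]):.1%} of {b})")
```

Some overlap is legitimate (proper nouns and loanwords such as "hotel" exist in several languages), so the interesting signal is pairs where a large fraction of one dictionary appears in the other.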
Impact
Typos in the selected language go undetected whenever they happen to match a word from one of the extra languages (see the console output below).
Steps to Reproduce
spellCheckerTest.py
console output
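The attached spellCheckerTest.py and its console output are not reproduced above. A minimal sketch of a similar check, assuming the pyspellchecker API (SpellChecker, known(), unknown()) and using illustrative foreign words rather than the ones from the original script, might look like this:

```python
from spellchecker import SpellChecker

# Words that an English-only dictionary should treat as unknown.
# These particular words are illustrative; the original spellCheckerTest.py
# and its console output are not reproduced here.
foreign_samples = ["toujours", "jamais", "entonces", "gracias", "zusammen", "obrigado"]

spell = SpellChecker(language="en")

known = spell.known(foreign_samples)      # words the 'en' dictionary accepts
unknown = spell.unknown(foreign_samples)  # words it correctly rejects

print("accepted by en:", sorted(known))
print("rejected by en:", sorted(unknown))
```

Any foreign word printed under "accepted by en" is a cross-over entry of the kind described in Observed Behavior.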