What's the expected encoding by the way? Is it one uchardet is supposed to be able to detect currently? If I apply a change to discard charsets when invalid bytes are detected, your file just ends up as "unknown".
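For reference, here is a minimal sketch of what such an invalid-byte check amounts to, written against Python's codec tables rather than uchardet's actual code: a strict decode fails on any byte the charset leaves undefined, such as 0xFB in WINDOWS-1255.

```python
# Hypothetical validity check, sketched with Python's codec machinery
# (this is not uchardet's implementation):
def is_plausible(data: bytes, codec: str) -> bool:
    try:
        data.decode(codec)  # strict decoding raises on undefined bytes
        return True
    except UnicodeDecodeError:
        return False

print(is_plausible(b"\xfb", "cp1255"))  # False: 0xFB is unassigned in WINDOWS-1255
```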
Well, it looks like it could be one of the ISO-8859 charsets (like ISO-8859-1 or ISO-8859-15), but without any meaning: "àðôû". In this case, it is completely normal that uchardet cannot detect the encoding. No algorithm can pick the proper charset for random bytes when many charsets are compatible with those codepoints.
This is the example I gave on linuxfr. It is a Russian word, in WINDOWS-1251.
Oh, and I just realised I swapped the first two characters. It's E0 F0 FB F4: арфы.
Ok. Well, I see now it could also be MAC-CYRILLIC with the same characters. In any case, the current language models return too low a confidence (not even 0.1) for any of these encodings to be recognized with certainty. I will keep this open for now and see if the Russian models can be improved, but I don't hold out much hope for uchardet's ability to recognize such short text.
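To see the ambiguity concretely, here is a quick sketch (again using Python's codec tables, not uchardet) showing that those four bytes decode without error under several single-byte charsets, so byte validity alone cannot pick a winner:

```python
# The four bytes are valid in all of these charsets, each yielding a
# different (and mostly meaningless) string -- hence the ambiguity.
data = bytes.fromhex("E0F0FBF4")
for codec in ("cp1251", "mac-cyrillic", "iso8859-1", "iso8859-15"):
    print(codec, "->", data.decode(codec))
```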
Maybe the confidence should depend on the percentage of recognised characters, and not on their number.
I don't understand what you are saying. The percentage of recognized characters is always 100%. If we don't recognize characters, it means they are invalid, and then it is definitely not the right encoding.
I mean the percentage of frequent characters. I don't know the formula used to determine the confidence, but doesn't the fact that it doesn't work with short character sequences mean that it relies at some point on the number of frequent byte sequences that were found, and not on the percentage of them?
If not, I still don't understand why it fails at recognizing E0 F0 FB F4. It's only very frequent Russian characters encoded in WINDOWS-1251.
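For what it's worth, here is a toy sketch of that proposal. The formula and the frequent-letter set are assumptions for illustration, not uchardet's real model or data:

```python
# Hypothetical confidence: the *ratio* of bytes that belong to a set of
# frequent Russian letters in WINDOWS-1251, rather than their count.
# FREQUENT_CP1251 is an illustrative set, not real frequency data.
FREQUENT_CP1251 = {0xE0, 0xE5, 0xE8, 0xEE, 0xF0, 0xF1, 0xF2, 0xF4, 0xFB}

def ratio_confidence(data: bytes) -> float:
    if not data:
        return 0.0
    hits = sum(1 for b in data if b in FREQUENT_CP1251)
    return hits / len(data)

print(ratio_confidence(bytes.fromhex("E0F0FBF4")))  # 1.0 for this sample
```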
Well, patches are accepted. :-) Just remember that uchardet still has to be generic, work with all possible languages and encodings (once frequency data has been gathered), and stay fast.
I am moving bug reports to the new hosting.
I think I will close this one though, not move it. Uchardet is not meant for detection on such a short string. It is actually pretty good even with short sentences, but with a single word of 4 characters, I think we are getting too close to the limits here.
For such single words, the approach you proposed on linuxfr (using dictionaries) is probably the only viable one, though as you noted yourself, it is quite slow (and uchardet is meant for quick processing, at least quick enough for a comfortable desktop workflow). A toy sketch of the idea follows.
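This is only an illustration of the dictionary idea, not a real design; the candidate charsets and the wordlist are placeholders:

```python
# Toy dictionary-based detection: try candidate charsets and keep those
# whose decoded text appears in a wordlist. CANDIDATES and RUSSIAN_WORDS
# are placeholders; a real implementation needs per-language dictionaries,
# which is what makes this approach slow in general.
CANDIDATES = ("cp1251", "mac-cyrillic", "koi8-r", "iso8859-5")
RUSSIAN_WORDS = {"арфы", "арфа"}

def dictionary_guess(data: bytes) -> list[str]:
    matches = []
    for codec in CANDIDATES:
        try:
            word = data.decode(codec)
        except UnicodeDecodeError:
            continue  # invalid bytes: definitely not this charset
        if word.lower() in RUSSIAN_WORDS:
            matches.append(codec)
    return matches

# E0 F0 F4 FB is "арфы" in WINDOWS-1251; MAC-CYRILLIC maps these four
# bytes to the same letters, so both charsets match.
print(dictionary_guess(bytes.fromhex("E0F0F4FB")))
```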
Now I may be wrong, and I happily welcome patches if you can implement an efficient improvement to the algorithm which works with your example (while staying fast and not breaking what currently works): https://bugs.freedesktop.org/enter_bug.cgi?product=uchardet
Also if you have longer texts which are not correctly detected, do not hesitate to report them as well! :-)
Thanks for reporting your issue!
Uchardet detects this file as WINDOWS-1255 whereas it contains the octet 0xFB, which is invalid in this charset. How to reproduce: