saffsd / langid.py

Stand-alone language identification system

Hard-coded lookup for very short strings? #50

Open bittlingmayer opened 8 years ago

bittlingmayer commented 8 years ago

It's understandable that performance for very short strings is poor. Could we create a mapping with hand-assigned weights for those?

I believe strings like 'yeah', 'no', 'si', 'haha', 'hehe' and so on should always be classified reasonably. I am happy to donate my mapping for this.
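For concreteness, a minimal sketch of how such a lookup could wrap langid's public `classify` function; the table entries and weights below are invented placeholders, not the donated mapping:

```python
import langid

# Invented placeholder entries -- the real table would carry hand-assigned
# weights as proposed above; these scores are not on langid's raw scale.
SHORT_STRING_OVERRIDES = {
    "yeah": ("en", 1.0),
    "no":   ("und", 1.0),   # ambiguous across many languages
    "haha": ("und", 1.0),   # 'und' = ISO 639 code for "undetermined"
}

def classify(text):
    """Consult the hand-built table first, fall back to the trained model."""
    key = text.strip().lower()
    if key in SHORT_STRING_OVERRIDES:
        return SHORT_STRING_OVERRIDES[key]
    return langid.classify(text)
```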

saffsd commented 8 years ago

Hi! Thanks for the suggestion. How do you see such a mapping being used? Is there a hardcoded relationship (e.g. "yeah" -> "en"), or is it somehow used to modify the weights?

bittlingmayer commented 8 years ago

It is totally hardcoded, but it still includes probabilities where a string is plausible in many languages.

I think it could gradually be extended with some fuzzy matching to get more coverage. As a first step, I have made the matching somewhat fuzzy, so 'siiii' and 'nooo' are still covered: I canonicalise them to 'si' and 'no'.
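A minimal sketch of one way to do that canonicalisation, assuming the rule is simply to collapse repeated characters (the thread doesn't spell out the exact rules):

```python
import re

def canonicalise(token):
    # Collapse runs of the same character: 'siiii' -> 'si', 'nooo' -> 'no'.
    return re.sub(r"(.)\1+", r"\1", token.strip().lower())
```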

In some cases the language returned is 'und'.

Overall I see it as all upside: there is no benefit in leaving these things to chance, and no risk in a hardcoded mapping, provided it is done thoughtfully.

tripleee commented 8 years ago

Just out of curiosity, what do you map these to? "No" is a valid and common word in at least Spanish, Italian, French, and English. "Si" could be either Spanish or Italian (though improperly accented), or marginally French, and of course both words also exist as less common words in many other languages. I can't even imagine what you map "haha" and "hehe" to, though I guess they are more common as test strings in some regions (French- and German-speaking regions?).

bittlingmayer commented 8 years ago

I split it somewhat evenly between the languages where it is plausible as a complete standalone sentence (after removing diacritics). So although ' si ' may be more likely in Romanian or Albanian running text, the standalone sentence ' Si. ' is not likely in Romanian or Albanian.

Out of curiosity, does the model treat beginning of string and end of string as a character of sorts? That would partly remedy the above.
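To illustrate the boundary-marker idea in the question above (a sketch only; langid.py itself selects its byte n-gram features at training time, not like this):

```python
def char_ngrams(text, n=2, bos="^", eos="$"):
    # Pad with sentinel characters so word boundaries become features:
    # the bigrams '^s' and 'i$' only fire when 'si' stands alone.
    padded = bos + text + eos
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

print(char_ngrams("si"))  # ['^s', 'si', 'i$']
```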

For 'haha', 'hehe' and ':-)' I believe it's most useful to clients to return 'und'.

But again, I don't really wish to get dragged into the details of the mapping for strings like 'no'. (As it stands, 'yeah' returns 'id' and '¡No!' returns 'zh'; we can't do worse than that.) The argument that there's no perfect answer is understood. Fundamentally, we should be able and happy to incorporate a mapping of the top 1M queries/sentences with some "golden" probabilities. We can start with 10, then add 100, and so on. Queries follow a roughly Zipfian distribution, so a small table yields a lot of coverage.
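A sketch of what ingesting such a mapping could look like, assuming a plain three-column TSV of string, language, and probability; the file name, format, and the example distribution in the comment are all hypothetical:

```python
import csv
from collections import defaultdict

def load_golden(path):
    """Load a TSV of (string, language, probability) rows into a lookup."""
    table = defaultdict(dict)
    with open(path, newline="", encoding="utf-8") as f:
        for string, lang, prob in csv.reader(f, delimiter="\t"):
            table[string][lang] = float(prob)
    return dict(table)

# golden = load_golden("golden_short_strings.tsv")
# golden.get("no")  # e.g. {'en': 0.4, 'es': 0.3, 'it': 0.2, 'und': 0.1}
```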

EralpB commented 6 years ago

I think that's a genius idea, and "und" is a great example: I'm almost certain that any text which uses "und" is German, especially if it's a short text (say, on the order of a tweet).

bittlingmayer commented 6 years ago

That's the "stop words" or "function words" approach, and it is also very effective.

To be clear though, when I wrote 'und' above I meant not the natural-language string but the code returned for 'undetermined language'.
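For reference, a minimal sketch of the function-word heuristic being discussed; the word lists are tiny illustrative samples, not a vetted stop-word resource:

```python
# Tiny illustrative word lists -- a real implementation would use curated
# stop-word lists and handle ties and overlaps between languages.
FUNCTION_WORDS = {
    "de": {"und", "der", "die", "nicht"},
    "en": {"and", "the", "not", "of"},
}

def guess_by_function_words(text):
    tokens = set(text.lower().split())
    scores = {lang: len(tokens & words) for lang, words in FUNCTION_WORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "und"  # 'und' = undetermined

print(guess_by_function_words("der Hund und die Katze"))  # 'de'
```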