Niger-Volta-LTI / iranlowo

Ìrànlọ́wọ́ is a utility library for analysis & (pre)processing of Yorùbá text → https://pypi.org/project/iranlowo
MIT License

Language Identification Helper #8

Open Olamyy opened 5 years ago

Olamyy commented 5 years ago

I'm proposing a language identification helper module that can:

  1. Be used to build language ID models using any of the rule-based or learning algorithms available for doing this.
  2. Be used to identify languages.

Proposing a usage similar to:

from iranlowo.language import LanguageIdentifier

lang_a = 'eng'
lang_b = 'yor'
lang_a_corpus = 'path_to_corpus'
lang_b_corpus = 'path_to_corpus'

lang_model = LanguageIdentifier(langs=[lang_a, lang_b], corpus=[lang_a_corpus, lang_b_corpus], **kwargs)
lang_model.build(algo='', epoch=epoch, batch=batch, **kwargs)
lang_model.save('save_path')

Then this model can be loaded and used to identify languages like:

from iranlowo.language import identify_language, load_model

language_id_model = load_model('save_path')

language_id = identify_language(language_id_model, 'text')
ruohoruotsi commented 5 years ago

This is a great idea and API suggestion!

ruohoruotsi commented 5 years ago

I've assigned it to myself, so that I can investigate the algorithms for text language ID.

Offhand, I know of https://github.com/saffsd/langid.py and there may be some others based on measures of perplexity w.r.t. a simple n-gram or RNN language model.
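
For a quick baseline, this is roughly how langid.py is used (a minimal sketch; note that Yorùbá may not be among the 97 languages of its pretrained model, in which case langid.py's own training tools would be needed to build a custom model):

# Sketch: off-the-shelf identification with langid.py (pip install langid).
import langid

# Optionally restrict the candidate set -- only works for languages
# already covered by the pretrained model:
# langid.set_languages(['en', 'fr', 'sw'])

lang, score = langid.classify("This is a test sentence.")
print(lang, score)  # e.g. ('en', -54.4) -- (language code, confidence score)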

ruohoruotsi commented 5 years ago

Instead of sleeping I'm researching text language id 😆 . . .

fastText (from Facebook, 2017) claims superiority over langid.py: https://fasttext.cc/blog/2017/10/02/blog-post.html

They support Yorùbá (per their supported language ISO code list) and it seems like it's pretty straightforward to train.

Next steps are to try out the fastText tools and see whether, given text from yoruba-text, it can learn to identify Yorùbá text accurately enough at the sentence level and then at the word level.
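
As a rough sketch of what that training loop might look like with the fastText Python bindings (file names and hyperparameters below are placeholders, not settled choices):

# Sketch: sentence-level lang-id with the fastText Python bindings (pip install fasttext).
import fasttext

# train.txt: one sentence per line, prefixed with its label, e.g.
#   __label__yo Báwo ni o ṣe wà?
#   __label__en How are you doing?
model = fasttext.train_supervised(input="train.txt", epoch=25, lr=0.5, wordNgrams=2)

labels, probs = model.predict("Báwo ni o ṣe wà?")
print(labels[0], probs[0])  # e.g. __label__yo 0.98

model.save_model("langid.bin")  # reload later with fasttext.load_model("langid.bin")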

Olamyy commented 5 years ago

Awesome. I came across an approach here https://github.com/eginhard/word-level-language-id that I've been planning to go through for some time now but never had the time to. It's based on the Viterbi algorithm, so it's rule-based.
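
The core idea of Viterbi-style word-level language ID can be sketched roughly like this (a toy stand-in, not that repo's actual implementation: each word gets a per-language emission score, and a switch penalty discourages changing language between adjacent words):

# Sketch: word-level lang-id via Viterbi decoding over per-word language scores.
import math

LANGS = ["yor", "eng"]
SWITCH_PENALTY = -1.0  # log-prob penalty for switching language between words

def emission_logprob(word, lang):
    """Toy scorer: a real system would use char n-gram or lexicon models."""
    yoruba_chars = set("ẹọṣàáèéìíòóùúń")
    looks_yoruba = any(c in yoruba_chars for c in word.lower())
    if lang == "yor":
        return math.log(0.9) if looks_yoruba else math.log(0.4)
    return math.log(0.1) if looks_yoruba else math.log(0.6)

def viterbi_tag(words):
    # best[i][lang] = best log-prob of tagging words[:i+1] with words[i] in `lang`
    best = [{lang: emission_logprob(words[0], lang) for lang in LANGS}]
    back = [{}]
    for i in range(1, len(words)):
        best.append({})
        back.append({})
        for lang in LANGS:
            scores = {
                prev: best[i - 1][prev] + (0.0 if prev == lang else SWITCH_PENALTY)
                for prev in LANGS
            }
            prev_best = max(scores, key=scores.get)
            best[i][lang] = scores[prev_best] + emission_logprob(words[i], lang)
            back[i][lang] = prev_best
    # Trace back the best path
    lang = max(best[-1], key=best[-1].get)
    path = [lang]
    for i in range(len(words) - 1, 0, -1):
        lang = back[i][lang]
        path.append(lang)
    return list(reversed(path))

print(viterbi_tag("mo fẹ́ buy bread ní ọjà".split()))
# prints one predicted language tag per input word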

ruohoruotsi commented 5 years ago

Cool, we have some options!

While examining fastText last night, I pulled down some 65M words from a dozen languages (their training corpus), and I thought that rather than have a model that only does Yorùbá {True, False} (binary classification, logistic regression), maybe:

  1. We can retrain it with other text (multi-class classification: EN, JP, KO, ES, ... TUR, PT, ... YO, SWAHILI, ZULU), i.e. to be more robust at lang-id amongst other texts of the web.
  2. Or train it in a way compatible with their APIs and text embeddings, so that we can submit these text-derived data products back to the community (https://fasttext.cc/docs/en/crawl-vectors.html). I think we chatted about this on Slack, but they're using unclean Wikipedia text and I think we can do MUCH MUCH better now with our almost 1.5M words!!!
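
For option 1, building the multi-class training file from per-language corpora is mostly bookkeeping; something like this (paths and language codes are placeholders, with yoruba-text supplying the 'yo' lines):

# Sketch: turning per-language corpora into a multi-class fastText training file.
corpora = {
    "yo": "path/to/yoruba-text.txt",
    "en": "path/to/english.txt",
    "sw": "path/to/swahili.txt",
    "zu": "path/to/zulu.txt",
}

with open("langid.train.txt", "w", encoding="utf-8") as out:
    for code, path in corpora.items():
        with open(path, encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                if line:
                    out.write(f"__label__{code} {line}\n")

# Then: model = fasttext.train_supervised(input="langid.train.txt", ...)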