facebookresearch / stopes

A library for preparing data for machine translation research (monolingual preprocessing, bitext mining, etc.) built by the FAIR NLLB team.
https://facebookresearch.github.io/stopes/
MIT License

Using stopes with an unseen language #16

Open sete-nay opened 2 years ago

sete-nay commented 2 years ago

Hi, I'm trying to clean and preprocess bitext for finetuning NLLB on a new, unseen language. The source language is part of LASER3, but the target language is not included. Will it work if I replace laser3 with a BPE encoder pre-trained on my target language? Thank you!

python -m stopes.pipelines.bitext.global_mining_pipeline src_lang=fuv tgt_lang=zul demo_dir=.../stopes-repo/demo +preset=demo output_dir=. embed_text=laser3

Mortimerp9 commented 2 years ago

The laser3 encoder projects your text into a language-independent embedding space. Mining works by aligning projections of the src_lang sentences in that space with projections of the tgt_lang sentences in the same space. Because both sides live in the same language-independent space, we can compute a distance between the embeddings of any pair of sentences.

If you use a different encoder, it will probably not project into a compatible space.
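
To make the point concrete, here is a minimal, self-contained sketch, not the stopes mining code itself (the actual pipeline uses FAISS indexes and margin-based scoring): given two embedding matrices produced by the same encoder, nearest neighbours by cosine similarity are the candidate translation pairs. If the two sides came from different, incompatible encoders, these similarities would be meaningless.

```python
import numpy as np

def mine_pairs(src_emb: np.ndarray, tgt_emb: np.ndarray, min_sim: float = 0.7):
    """Toy mining: pair each source sentence with its nearest target
    sentence by cosine similarity, keeping pairs above min_sim.
    Both matrices must come from the SAME encoder (e.g. laser3)."""
    # L2-normalise rows so that a dot product equals cosine similarity.
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    sims = src @ tgt.T                  # (n_src, n_tgt) similarity matrix
    best = sims.argmax(axis=1)          # nearest target index per source
    return [(i, int(j), float(sims[i, j]))
            for i, j in enumerate(best) if sims[i, j] >= min_sim]
```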

sete-nay commented 2 years ago

Thanks, will try it with laser3. What should I indicate in tgt_lang for the unseen language?

avidale commented 2 years ago

> What should I indicate in tgt_lang for the unseen language?

You can assign any name you want to the new language. If this name is abc, then you will need to indicate tgt_lang=abc in the entry command.
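
For example, reusing the command from the first post, with abc as the placeholder name:

python -m stopes.pipelines.bitext.global_mining_pipeline src_lang=fuv tgt_lang=abc demo_dir=.../stopes-repo/demo +preset=demo output_dir=. embed_text=laser3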

Also, you need to make sure that the mining config correctly specifies how to find the source files for that language. If you use the demo config (+preset=demo in your command, which corresponds to this configuration), you will need to have the following two files (a sketch for creating them follows the list):

  1. $demo_dir/abc.gz with the source text in your language.
  2. $demo_dir/abc.nl with the number of lines of the file above.
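
A minimal sketch for producing those two files with the Python standard library; the language code abc and the input file monolingual_abc.txt are placeholders:

```python
import gzip

lang = "abc"  # placeholder code for the new language
n_lines = 0

# Write the monolingual text into abc.gz (place it under $demo_dir) ...
with open("monolingual_abc.txt", "rt", encoding="utf-8") as src, \
     gzip.open(f"{lang}.gz", "wt", encoding="utf-8") as out:
    for line in src:
        out.write(line)
        n_lines += 1

# ... and record its line count in abc.nl next to it.
with open(f"{lang}.nl", "wt", encoding="utf-8") as nl:
    nl.write(f"{n_lines}\n")
```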

Finally, you will need to add the path to your custom encoder (and its vocabulary, if it is also custom) to the lang_configs part of the demo config.
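
A rough sketch of what such an entry could look like; the key names below (encoder_model, spm_model, spm_vocab) are assumptions, so check them against the actual demo preset shipped with stopes before relying on them:

```yaml
# Hypothetical lang_configs entry for a custom encoder; all key names
# and paths are placeholders, not the verified stopes schema.
embed_text:
  lang_configs:
    abc:
      encoder_model: /path/to/custom_encoder.pt
      spm_model: /path/to/custom_spm.model
      spm_vocab: /path/to/custom_spm.cvocab
```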

heffernankevin commented 2 years ago

> Hi, I'm trying to clean and preprocess bitext for finetuning NLLB on a new, unseen language. The source language is part of LASER3, but the target language is not included. Will it work if I replace laser3 with a BPE encoder pre-trained on my target language? Thank you!
>
> python -m stopes.pipelines.bitext.global_mining_pipeline src_lang=fuv tgt_lang=zul demo_dir=.../stopes-repo/demo +preset=demo output_dir=. embed_text=laser3

Hi @sete-nay, out of curiosity, what is your tgt_lang? LASER3 plus LASER2 cover over 200 languages. If the target language isn't covered by LASER3, it may be included in LASER2; you can find the list of supported languages for LASER2 here. If it's in neither of them, you could even try to create your own LASER3 encoder and mine with that. The training code to do so is here.

sete-nay commented 2 years ago

Hi @heffernankevin, my tgt_lang is Circassian (Kabardian), which is unfortunately not part of LASER2 or LASER3. Thanks for the hint; I will look into LASER encoder training, or otherwise just use a simpler tool. My goal is to create a parallel corpus that can be used for finetuning NLLB or another multilingual model on Circassian.