-
Hi All,
I am not sure if this is a bug or more of a request for an example/guidance. I am trying to use NLLB for translation at scale with multiple GPUs for inference, but I cannot figure out how to …
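One common way to scale NLLB inference across GPUs is simple data parallelism: shard the input corpus and run one worker process per device. Below is a minimal sketch of that idea; the checkpoint `facebook/nllb-200-distilled-600M`, the language codes, and the batch size of 32 are illustrative assumptions, not a confirmed recipe from the maintainers.

```python
# Data-parallel sketch: shard a corpus across N GPUs, one worker per device.
# Checkpoint, language codes, and batch size are illustrative assumptions.

def split_into_shards(lines, n_shards):
    """Round-robin split so every shard gets roughly len(lines)/n_shards items."""
    return [lines[i::n_shards] for i in range(n_shards)]

def translate_shard(rank, lines, out_queue):
    # Heavy imports stay inside the worker so each spawned process
    # initialises CUDA independently.
    import torch
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

    device = f"cuda:{rank}"
    tok = AutoTokenizer.from_pretrained(
        "facebook/nllb-200-distilled-600M", src_lang="fra_Latn")
    model = AutoModelForSeq2SeqLM.from_pretrained(
        "facebook/nllb-200-distilled-600M").to(device).eval()

    results = []
    with torch.no_grad():
        for i in range(0, len(lines), 32):  # batch size per GPU: tune to memory
            batch = tok(lines[i:i + 32], return_tensors="pt",
                        padding=True).to(device)
            out = model.generate(
                **batch,
                forced_bos_token_id=tok.convert_tokens_to_ids("eng_Latn"))
            results.extend(tok.batch_decode(out, skip_special_tokens=True))
    out_queue.put((rank, results))  # rank lets the parent restore shard order
```

A launcher would then spawn one `translate_shard` process per GPU with `torch.multiprocessing` (spawn context), pass each process its shard from `split_into_shards`, and collect the `(rank, results)` pairs from the queue to restore order.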
-
### Model description
Thanks for supporting NLLB and closing this issue https://github.com/huggingface/transformers/issues/18043. I'm wondering if huggingface can further support the language identif…
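Until there is official pipeline support, the language-identification model released alongside NLLB can be used directly through the `fasttext` Python package. A hedged sketch follows; the file name `lid218e.bin` is an assumption (use whatever LID checkpoint file you actually downloaded).

```python
# Sketch: language identification with the fastText LID model released
# around NLLB. The model file name "lid218e.bin" is an assumption.

def label_to_code(label):
    """fastText returns labels like '__label__eng_Latn'; strip the prefix."""
    return label.removeprefix("__label__")

def identify(texts, model_path="lid218e.bin"):
    import fasttext  # lazy import: label_to_code is usable without fasttext
    model = fasttext.load_model(model_path)
    # fastText cannot handle embedded newlines, so flatten them first.
    labels, scores = model.predict([t.replace("\n", " ") for t in texts])
    return [(label_to_code(l[0]), float(s[0])) for l, s in zip(labels, scores)]
```

The returned codes (e.g. `eng_Latn`) match the FLORES-200 style codes that the NLLB tokenizer expects, so the LID output can feed straight into a translation call.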
-
Hello, I saw your YouTube video and your GitHub code.
Thank you!
Thanks to them, I was able to run it in Colab, and when I did, I observed that it shows really good performance.
I want to …
-
Hi,
I followed the instructions from the LASER [README](https://github.com/facebookresearch/fairseq/blob/nllb/examples/laser/README.md) and trained a LASER model with my own data (Chinese-English bitext).
B…
-
Right now, we cannot set/use a custom path for the Translation pipeline. The path set in the parameter is only used as a default fallback in the event that the language pair doesn't have a Helsinki OP…
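One workaround today is to bypass the Helsinki-NLP lookup entirely by passing an explicit checkpoint (local path or hub id) when constructing the pipeline. The sketch below factors the arguments into a helper; the path `./my-finetuned-nllb` and the language codes are placeholders, not a confirmed fix for the fallback behaviour described above.

```python
# Sketch: pin the translation pipeline to one explicit checkpoint so the
# Helsinki-NLP default lookup never applies. Path and codes are placeholders.

def pipeline_kwargs(model_path, src_lang, tgt_lang):
    """Collect the arguments that select an explicit model for the pipeline."""
    return {"task": "translation", "model": model_path,
            "src_lang": src_lang, "tgt_lang": tgt_lang}

def make_translator(model_path="./my-finetuned-nllb",
                    src_lang="fra_Latn", tgt_lang="eng_Latn"):
    from transformers import pipeline  # lazy import
    return pipeline(**pipeline_kwargs(model_path, src_lang, tgt_lang))
```

With an explicit `model`, the pipeline never consults the language-pair default table, so a custom path is always honoured.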
-
Hi team,
The opportunity for parallel translation (in a single batch) from different source languages is of particular interest.
The current obstacle lies in the fact that the tokenizer depends…
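Since the tokenizer's source-language token is set globally, one workaround is to tokenize each language group separately, pad the encodings to a common length, and run a single `generate()` call over the concatenated batch. A hedged sketch, assuming the `facebook/nllb-200-distilled-600M` checkpoint and the `group_by_lang` helper defined here:

```python
# Sketch: mixed-source-language batching. Tokenize per language group
# (each with its own src_lang), pad to a common length, then run one
# generate() call. Model id and helper names are illustrative.
import collections

def group_by_lang(pairs):
    """pairs: [(src_lang_code, sentence), ...] -> {lang: [sentences]}"""
    groups = collections.defaultdict(list)
    for lang, sent in pairs:
        groups[lang].append(sent)
    return dict(groups)

def translate_mixed(pairs, tgt_lang="eng_Latn"):
    import torch
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
    model = AutoModelForSeq2SeqLM.from_pretrained(
        "facebook/nllb-200-distilled-600M").eval()

    groups = group_by_lang(pairs)
    enc_ids, enc_mask = [], []
    for lang, sents in groups.items():
        tok.src_lang = lang  # the language token is set per group
        enc = tok(sents, return_tensors="pt", padding=True)
        enc_ids.append(enc.input_ids)
        enc_mask.append(enc.attention_mask)

    # Right-pad every group's encodings to the longest sequence, then stack.
    max_len = max(t.shape[1] for t in enc_ids)
    ids = torch.cat([torch.nn.functional.pad(t, (0, max_len - t.shape[1]),
                                             value=tok.pad_token_id)
                     for t in enc_ids])
    mask = torch.cat([torch.nn.functional.pad(t, (0, max_len - t.shape[1]))
                      for t in enc_mask])

    with torch.no_grad():
        out = model.generate(
            input_ids=ids, attention_mask=mask,
            forced_bos_token_id=tok.convert_tokens_to_ids(tgt_lang))

    # Split the decoded batch back into per-language groups.
    decoded = tok.batch_decode(out, skip_special_tokens=True)
    result, pos = {}, 0
    for lang, sents in groups.items():
        result[lang] = decoded[pos:pos + len(sents)]
        pos += len(sents)
    return result
```

Right padding is safe here because the encoder-decoder model masks padded positions via the attention mask; only the target language must be shared across the batch (it is fixed by `forced_bos_token_id`).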
-
Hello @guillaumekln, thanks for adding the NLLB support.
I just tried the model conversion: `ct2-transformers-converter --model facebook/nllb-200-distilled-600M --output_dir nllb-200-distilled-600M…
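For context, once a conversion like the one above succeeds, inference with the converted directory typically looks like the following. This is a hedged sketch: the output directory name mirrors the (truncated) command above, and the `target_prefixes` helper is mine, not part of either library.

```python
# Sketch: running a CTranslate2-converted NLLB model. The HF tokenizer
# handles subwords; CTranslate2 consumes/produces token strings.

def target_prefixes(tgt_lang, n):
    """One [tgt_lang] target prefix per input sentence (helper, not API)."""
    return [[tgt_lang] for _ in range(n)]

def translate_ct2(texts, model_dir="nllb-200-distilled-600M",
                  src="fra_Latn", tgt="eng_Latn"):
    import ctranslate2
    from transformers import AutoTokenizer

    tok = AutoTokenizer.from_pretrained(
        "facebook/nllb-200-distilled-600M", src_lang=src)
    translator = ctranslate2.Translator(model_dir)

    batch = [tok.convert_ids_to_tokens(tok.encode(t)) for t in texts]
    results = translator.translate_batch(
        batch, target_prefix=target_prefixes(tgt, len(texts)))
    # hypotheses[0][0] is the forced target-language token; drop it.
    return [tok.decode(tok.convert_tokens_to_ids(r.hypotheses[0][1:]))
            for r in results]
```

Note that `translate_batch` expects lists of subword token strings rather than ids, and the target language is injected through `target_prefix`.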
-
Originally from https://github.com/facebookresearch/LASER/tree/main/data/nllb200: even curling 1000 URLs took almost 2 hours, so why not just include the text data?
-
Hi! Thank you for your great work and making it publicly available!
I am trying to use your NLLB model and thanks to the huggingface integration it is easy to do.
However, you have also published…
-
In the paper, you wrote that for the Assamese language you have 738k monolingual sentences and 43.7k bitext pairs, but we are getting only 1912 Assamese-English pairs. Can you please provide us the whole dataset, i.e. mono 7…