-
Example: https://huggingface.co/jbochi/madlad400-3b-mt/tree/main
In Google's own space: https://huggingface.co/google/madlad400-10b-mt
The author converted the format of the three smallest models (3b, 7b, 1…
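For anyone who wants to try the converted checkpoint, here is a minimal loading sketch with Hugging Face Transformers; the T5 classes and the `<2xx>` target-language prefix follow the model card, so treat them as assumptions rather than verified facts.
```python
# A minimal sketch, assuming the converted checkpoint loads as a T5 seq2seq model
# (model id and the "<2xx>" target-language prefix are taken from the model card).
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "jbochi/madlad400-3b-mt"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# MADLAD-400 expects the target language as a "<2xx>" prefix on the source text.
text = "<2pt> I love pizza!"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```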
-
## ❓ Questions and Help
### Before asking:
1. search the issues.
2. search the docs.
#### My question?
I am trying to fine-tune the NLLB model on the Moroccan Arabic and English languages. I pr…
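For reference, a minimal fine-tuning sketch with Hugging Face Transformers, not the exact setup from this issue: the checkpoint, the FLORES-200 codes `ary_Arab`/`eng_Latn`, and the toy sentence pair are assumptions.
```python
# A minimal sketch: fine-tune facebook/nllb-200-distilled-600M on a toy
# Moroccan Arabic -> English pair. Language codes follow the FLORES-200 convention.
from datasets import Dataset
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

model_name = "facebook/nllb-200-distilled-600M"
tokenizer = AutoTokenizer.from_pretrained(model_name, src_lang="ary_Arab", tgt_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Hypothetical toy corpus; replace with a real parallel dataset.
train = Dataset.from_dict({
    "src": ["واش كاين شي مشكل؟"],
    "tgt": ["Is there a problem?"],
})

def preprocess(batch):
    # Tokenize source and target together so labels are built automatically.
    return tokenizer(batch["src"], text_target=batch["tgt"],
                     truncation=True, max_length=128)

train = train.map(preprocess, batched=True, remove_columns=["src", "tgt"])

args = Seq2SeqTrainingArguments(output_dir="nllb-ary-eng",
                                per_device_train_batch_size=4,
                                num_train_epochs=1,
                                learning_rate=1e-5)
trainer = Seq2SeqTrainer(model=model, args=args, train_dataset=train,
                         data_collator=DataCollatorForSeq2Seq(tokenizer, model=model))
trainer.train()
```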
-
I have two computers, one handling Western-language text and one Chinese, both running the new version of the translation program. I feel the old version is more accurate, while the new version is faster, but sometimes it fails to produce a translation, or it leaves out the @@ or @$ markers. Once the new version produced the same translation on three consecutive lines, regardless of the source text. So I am sure it is not a problem with the computers, because one happens to translate Western-language text and the other Chinese, and I cross-checked them. This is just my small observation; I am still looking into it. Thanks for your hard work, Abse4411.
-
Hi,
I am a researcher working on low-resource languages native to Sri Lanka (Sinhala and Tamil). The NLLB mined dataset is an excellent starting point for us, so I am using the instructions provided…
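As a starting point, here is a sketch of streaming a mined pair from the Hub mirror; the `allenai/nllb` dataset id, the `eng_Latn-sin_Sinh` config name, and the `translation` field layout are assumptions, so check the dataset card for the exact names.
```python
# A sketch, assuming the NLLB mined bitext is mirrored on the Hugging Face Hub as
# "allenai/nllb" with per-pair configs such as "eng_Latn-sin_Sinh" (or "eng_Latn-tam_Taml").
from datasets import load_dataset

# trust_remote_code may be required for script-based datasets, depending on the
# datasets version installed.
pair = load_dataset("allenai/nllb", "eng_Latn-sin_Sinh",
                    split="train", streaming=True, trust_remote_code=True)

for i, example in enumerate(pair):
    # Assumed layout: each row holds a "translation" dict keyed by language code.
    print(example["translation"]["eng_Latn"], "|||", example["translation"]["sin_Sinh"])
    if i == 4:
        break
```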
-
Greetings Everyone,
I am starting to learn Deep Learning (especially Machine Translation). Recently I found that Facebook released pre-trained models like **M2M100** and **NLLB200**. In HuggingFace…
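For a first experiment, here is a quick translation sketch with the Transformers M2M100 classes; the NLLB-200 checkpoints follow the same seq2seq pattern, just with FLORES-200 language codes. The checkpoint name and language codes below are illustrative.
```python
# A quick-start sketch for trying M2M100 in Hugging Face Transformers.
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model_name = "facebook/m2m100_418M"
tokenizer = M2M100Tokenizer.from_pretrained(model_name)
model = M2M100ForConditionalGeneration.from_pretrained(model_name)

tokenizer.src_lang = "en"  # source language code
encoded = tokenizer("Life is like a box of chocolates.", return_tensors="pt")

# Force the decoder to start with the target-language token (French here).
generated = model.generate(**encoded,
                           forced_bos_token_id=tokenizer.get_lang_id("fr"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```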
-
## ❓ Questions and Help
I want to test fine-tuning the NLLB models (3.3B). I followed the doc in [Finetuning NLLB models](https://github.com/facebookresearch/fairseq/tree/nllb/examples/nllb/modeling)
wit…
-
## ❓ Questions and Help
#### What is your question?
How can I finetune the NLLB multilingual translation model on multiple language pairs together?
I found the document for fine-tuning NLLB, but it …
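One possible approach, sketched with Hugging Face Transformers rather than the fairseq recipe the NLLB docs describe: mix all pairs into one dataset, let each example carry its own FLORES-200 source/target codes, and switch the tokenizer's languages per example. The corpus below is a hypothetical toy example.
```python
# A sketch of mixing several language pairs in a single fine-tuning run.
# Each row names its own language pair; the tokenizer is switched per example
# so the correct language tags are inserted into inputs and labels.
from datasets import Dataset
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "facebook/nllb-200-distilled-600M"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Hypothetical mixed corpus covering two pairs.
mixed = Dataset.from_dict({
    "src":      ["Bonjour tout le monde.", "Hallo Welt."],
    "tgt":      ["Hello everyone.",        "Hello world."],
    "src_lang": ["fra_Latn",               "deu_Latn"],
    "tgt_lang": ["eng_Latn",               "eng_Latn"],
})

def preprocess(example):
    tokenizer.src_lang = example["src_lang"]
    tokenizer.tgt_lang = example["tgt_lang"]
    return tokenizer(example["src"], text_target=example["tgt"],
                     truncation=True, max_length=128)

mixed = mixed.map(preprocess, remove_columns=mixed.column_names)
# `mixed` can now be passed to Seq2SeqTrainer exactly as in a single-pair setup.
```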
-
```cmd
###################################
# Whispering Tiger is starting... #
###################################
Websocket: Server started.
Initializing medium NLLB-200 model.
Downloading medi…
```
-
Originally from https://github.com/facebookresearch/LASER/tree/main/data/nllb200: even curling 1000 URLs took almost 2 hours, so why not just include the text data?
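Until the text itself is bundled, here is a sketch of fetching the URL list concurrently rather than one curl call at a time; the `urls.txt` filename and the output directory are assumptions.
```python
# A sketch of downloading many URLs in parallel with a thread pool;
# "urls.txt" is assumed to contain one URL per line.
import concurrent.futures
import os
import urllib.request

def fetch(url: str, out_dir: str = "downloads") -> str:
    """Download a single URL into out_dir and return the local path."""
    os.makedirs(out_dir, exist_ok=True)
    dest = os.path.join(out_dir, os.path.basename(url.rstrip("/")) or "index")
    urllib.request.urlretrieve(url, dest)
    return dest

with open("urls.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

with concurrent.futures.ThreadPoolExecutor(max_workers=16) as pool:
    for path in pool.map(fetch, urls):
        print("saved", path)
```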
-
In the paper, you wrote that for the Assamese language you have 738k monolingual texts and 43.7k bitexts, but we are getting only 1912 Assamese-English pairs. Can you please provide us the whole dataset, i.e. mono 7…