-
Hello! Thanks, Tim! I tried `bitsandbytes` for language models like BLOOM, and it works well.
I have a question about NMT models like NLLB, M2M, mBART, or OPUS. I tried inference for NLLB, and appa…
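For context, loading a seq2seq NMT checkpoint in 8-bit goes through the same `transformers` + `bitsandbytes` path as BLOOM. A minimal sketch, assuming a CUDA GPU and the `facebook/nllb-200-distilled-600M` checkpoint as an example (M2M-100, mBART, and OPUS-MT checkpoints load the same way):

```python
# Sketch: 8-bit NLLB inference via transformers + bitsandbytes (assumes a CUDA GPU).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, BitsAndBytesConfig

model_name = "facebook/nllb-200-distilled-600M"  # example seq2seq NMT checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(
    model_name,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)

inputs = tokenizer("The weather is nice today.", return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("fra_Latn"),  # target language
    max_new_tokens=64,
)
print(tokenizer.batch_decode(output, skip_special_tokens=True))
```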
-
I have this sample code, and I tested it with [nllb](https://opennmt.net/CTranslate2/guides/transformers.html#nllb) and [m2m-100](https://opennmt.net/CTranslate2/guides/transformers.html#m2m-100):
```c…
-
Greetings Everyone,
I am starting to learn Deep Learning (especially Machine Translation). Recently I found that Facebook released pre-trained models like **M2M100** and **NLLB200**. In HuggingFace…
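If a starting point helps: a minimal, hedged sketch of running one of those pre-trained checkpoints through the Hugging Face `translation` pipeline (the checkpoint name and language codes below are just example choices):

```python
# Sketch: translating with a pre-trained NLLB checkpoint via the Hugging Face pipeline.
from transformers import pipeline

translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",  # example checkpoint
    src_lang="eng_Latn",                       # NLLB uses FLORES-200 language codes
    tgt_lang="deu_Latn",
)
print(translator("Machine translation is fun to learn.", max_length=64))
```

M2M100 can be run through the same pipeline, but it uses its own short language codes such as `en` and `de`.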
-
Hello, @kauterry! Thanks a lot for the detailed answers on other issues.
I would like to prepare data for NLLB finetuning using the pre-trained SPM model.
My question is "what is the format of these…
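While that format question stands, here is a hedged sketch of how a pre-trained SentencePiece model is typically applied to raw text with the `sentencepiece` library; `nllb_spm.model` and the data file names are placeholders, not the actual NLLB release paths:

```python
# Sketch: applying a pre-trained SentencePiece model to raw text before fine-tuning.
# "nllb_spm.model", "train.eng", and "train.spm.eng" are placeholder file names.
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="nllb_spm.model")

with open("train.eng", encoding="utf-8") as fin, \
     open("train.spm.eng", "w", encoding="utf-8") as fout:
    for line in fin:
        pieces = sp.encode(line.strip(), out_type=str)  # list of subword pieces
        fout.write(" ".join(pieces) + "\n")
```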
-
Hey,
I want to fine-tune the NLLB-200 model. As instructed by the data [ReadMe](https://github.com/facebookresearch/fairseq/tree/nllb/examples/nllb/data) documentation, I have tried the filtering p…
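(For readers unfamiliar with that step: the filtering pipeline essentially drops noisy sentence pairs. A generic, illustrative sketch of length and length-ratio filtering, with made-up thresholds, not the fairseq scripts themselves:)

```python
# Illustrative only: basic length / length-ratio filtering of parallel data.
# The thresholds are made-up examples, not the values used by the fairseq pipeline.
def keep_pair(src: str, tgt: str, max_len: int = 250, max_ratio: float = 3.0) -> bool:
    src_tokens, tgt_tokens = src.split(), tgt.split()
    if not src_tokens or not tgt_tokens:
        return False  # drop empty sides
    if len(src_tokens) > max_len or len(tgt_tokens) > max_len:
        return False  # drop overly long sentences
    ratio = len(src_tokens) / len(tgt_tokens)
    return 1.0 / max_ratio <= ratio <= max_ratio  # drop badly mismatched pairs

pairs = [("A short sentence.", "Ein kurzer Satz."), ("Hello there!", "")]
print([keep_pair(s, t) for s, t in pairs])  # [True, False]
```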
-
# Crash report
### What happened?
I'm experiencing a persistent issue with a Tkinter-based GUI application on macOS. When running the script directly in the terminal, everything functions correctly,…
-
Hello! Could you please clarify the license of the **NLLB models**? This [repository](https://github.com/facebookresearch/fairseq/tree/nllb) states the MIT license, while on Hugging Face, the [model card]…
-
### System Info
I am getting the following error while using Accelerate for M2M100 on Google Colab Pro. Here is the code snippet:
import torch
device=torch.device('cuda' if torch.cuda.is_avail…
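For reference, a minimal sketch of how M2M100 is typically loaded with Accelerate handling device placement (`device_map="auto"`); the checkpoint name and language codes are example choices:

```python
# Sketch: M2M100 inference with Accelerate placing the weights (device_map="auto").
import torch
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="en")
model = M2M100ForConditionalGeneration.from_pretrained(
    "facebook/m2m100_418M",
    device_map="auto",          # requires `accelerate` to be installed
    torch_dtype=torch.float16,  # optional; fits a Colab GPU more easily
)

inputs = tokenizer("Life is like a box of chocolates.", return_tensors="pt").to(model.device)
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.get_lang_id("fr"),  # target language
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```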
-
```cmd
###################################
# Whispering Tiger is starting... #
###################################
Websocket: Server started.
Initializing medium NLLB-200 model.
Downloading medi…
-
### Feature request
Can run_translation.py support NLLB model fine-tuning? Fine-tuning a model with run_translation.py is much easier.
### Motivation
I want an easy way to fine-tune NLLB…
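In the meantime, the NLLB-specific pieces on top of a generic seq2seq setup are mainly the FLORES-200 language codes on the tokenizer and forcing the target-language token at generation time. A hedged sketch with an example checkpoint and language pair:

```python
# Sketch: the NLLB-specific pieces a fine-tuning script needs on top of a
# generic seq2seq setup (e.g. run_translation.py or a custom Trainer loop).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "facebook/nllb-200-distilled-600M"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(
    checkpoint, src_lang="eng_Latn", tgt_lang="deu_Latn"  # FLORES-200 language codes
)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Training side: text_target= builds decoder labels using the configured tgt_lang.
batch = tokenizer(
    "This is a training sentence.",
    text_target="Dies ist ein Trainingssatz.",
    return_tensors="pt",
)
loss = model(**batch).loss
print(float(loss))

# Evaluation side: force the first generated token to be the target language code.
generated = model.generate(
    input_ids=batch["input_ids"],
    attention_mask=batch["attention_mask"],
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("deu_Latn"),
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```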