-
When I run finetune, it tells me:
Traceback (most recent call last):
  File "/opt/conda/bin/m4t_finetune", line 8, in <module>
    sys.exit(main())
  File "/opt/conda/lib/python3.10/site-packages/seamless_c…
-
### System Info
"@xenova/transformers": "github:xenova/transformers.js#v3
Using Pop!_OS 22.04 in a Node.js environment
### Environment/Platform
- [ ] Website/web-app
- [ ] Browser extension…
-
### Model description
With the recent support for custom models, is it possible to run [IndicTrans2](https://huggingface.co/ai4bharat/indictrans2-en-indic-dist-200M)? It is basically NLLB with a cus…
-
In [`2645700`](https://github.com/OpenVoiceOS/status/commit/2645700193375ce60789520d89a35946bd1821d6), Translator - NLLB - Smart'Gic (https://translator.smartgic.io/nllb/status) was **down**:
- HTTP …
-
Navigate to https://github.com/facebookresearch/fairseq/tree/nllb and clone the repo. Before running the install instructions, add the following to the setup.py script:
After line 270, add the foll…
-
Hi,
I have used all the different NLLB models for Japanese-to-English and English-to-Japanese translation. I have observed that the translation quality of NLLB-200 (Dense, 3.3B) is very bad when c…
-
I'm using the code below, which tries to translate from Romanian to English:
```
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("facebo…
```
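For reference, a complete version of this pattern could look like the sketch below. The checkpoint name is an assumption (the distilled 600M NLLB variant; substitute whichever you are using), and NLLB expects FLORES-200 language codes, so Romanian is `ron_Latn` and English is `eng_Latn`. The key detail is passing `forced_bos_token_id` so generation starts in the target language.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Assumption: the distilled 600M checkpoint; swap in the NLLB variant
# you are actually using.
MODEL_NAME = "facebook/nllb-200-distilled-600M"

def translate_ro_to_en(text: str) -> str:
    # src_lang makes the tokenizer prepend the Romanian language token.
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, src_lang="ron_Latn")
    model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)
    inputs = tokenizer(text, return_tensors="pt")
    # forced_bos_token_id pins the first generated token to the target
    # language; without it, NLLB may emit output in the wrong language.
    generated = model.generate(
        **inputs,
        forced_bos_token_id=tokenizer.convert_tokens_to_ids("eng_Latn"),
        max_length=128,
    )
    return tokenizer.batch_decode(generated, skip_special_tokens=True)[0]
```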
-
The first round of research was not too positive, but more is needed to fully test this path. It appears that SILNLP may also be broken for fine-tuning on multiple language pairs.
-
Ideas to enable this could be:
* Train a model with the mid-verse USFM tokens (or replacement tokens) so that it learns to place them properly.
* Reinsert the USFM after generation of the draft by using the…
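The reinsertion idea could be prototyped along these lines. This is a minimal sketch with a hypothetical `translate` callable and a simplified marker pattern that does not cover the full USFM spec; properly placing mid-verse markers would need token-level alignment, which this does not attempt.

```python
import re
from typing import Callable, List, Tuple

# Simplified USFM marker pattern ("\v 1", "\p", "\q1"); an assumption,
# not the full USFM grammar.
MARKER = re.compile(r"(\\[a-z]+\d?(?:\s+\d+)?)")

def split_usfm(text: str) -> List[Tuple[str, str]]:
    """Split USFM text into (marker, following-text) pairs."""
    parts = MARKER.split(text)
    pairs = []
    if parts[0].strip():
        pairs.append(("", parts[0]))  # text before the first marker
    for i in range(1, len(parts), 2):
        pairs.append((parts[i], parts[i + 1] if i + 1 < len(parts) else ""))
    return pairs

def translate_usfm(text: str, translate: Callable[[str], str]) -> str:
    """Translate segment by segment, reinserting each marker before its segment."""
    out = []
    for marker, segment in split_usfm(text):
        translated = translate(segment.strip()) if segment.strip() else ""
        piece = (marker + " " + translated).strip() if translated else marker
        out.append(piece)
    return " ".join(p for p in out if p)
```

Translating each inter-marker segment independently keeps markers at segment boundaries, at the cost of losing cross-segment context during translation.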
-
Each microservice in VE could maintain a JSON file with the items whose versions it wants to expose in its code base, and an API could be used to fetch these values.
For example, in CMS, as we h…
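A minimal sketch of that idea follows. Every name here (the manifest contents, filename, and helper functions) is hypothetical: each service keeps a small JSON manifest in its code base, and a version-listing API handler simply reads it back.

```python
import json
from pathlib import Path

# Hypothetical manifest a service (e.g. CMS) would keep in its code base.
MANIFEST = {
    "service": "cms",
    "versions": {
        "api": "2.4.1",
        "schema": "7",
    },
}

def write_manifest(path: Path, manifest: dict) -> None:
    """Persist the version manifest as JSON in the service's code base."""
    path.write_text(json.dumps(manifest, indent=2))

def read_versions(path: Path) -> dict:
    """What a version-listing API handler would return for this service."""
    return json.loads(path.read_text())["versions"]
```

A central API could then aggregate `read_versions` across each service's manifest to expose one consolidated version report.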