-
Hi,
This seems really interesting.
I would be interested in seeing the performance on WMT2016 and WMT2012. 😃
Good job!
-
Hello,
Your README states:
> Inputs should be tokenized and each line is a source language sentence and its target language translation, separated by (|||). You can see some examples in the exam…
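For anyone else puzzling over the format, here is a minimal sketch of reading one such line — the sample sentence pair and the exact whitespace handling are my assumptions, not from the README:

```python
# Hypothetical sketch of the format quoted above: tokenized source and
# target sentences on one line, separated by "|||".
line = "das ist ein Test ||| this is a test"
src, tgt = (side.strip().split() for side in line.split("|||"))
print(src, tgt)  # ['das', 'ist', 'ein', 'Test'] ['this', 'is', 'a', 'test']
```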
-
Hi, I have some bug fixes for the program.
The thing is, I still don't understand git, so I made a public repo on Bitbucket:
https://bitbucket.org/fran_/elvis-fbrau/
I added some features (like it or no…
-
Hi,
I was trying to train using XLM-R base on the assembled training data, but it doesn't converge and gives random output (24% accuracy on eng_Latn), while I get around 53% accuracy using mBERT.…
-
Hi,
has anyone managed to build this on Fedora 33?
I tried to add the Copr repository and install cde, but it says it's unable to find a match.
-
Hi,
I found a weird thing: when using a multilingual BERT model, e.g. bert-base-multilingual-uncased, it seems like grad_cache doesn't work. I know it sounds weird, changing different bert models…
-
Hi all
It seems like for sequence tagging tasks like WikiANN, the metrics are computed on truncated sequences (up to the max sequence length). A consequence of that would be that for the same model, the m…
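To illustrate the concern, here is a toy sketch (the labels and the truncation length are invented, not taken from WikiANN) showing that token accuracy computed over truncated sequences can differ from accuracy over the full sequences, for the very same predictions:

```python
# Toy example: the same predictions score differently depending on whether
# the gold/predicted label sequences are truncated to max_seq_len.
gold = ["B-PER", "O", "O", "B-LOC", "I-LOC"]
pred = ["B-PER", "O", "O", "O", "O"]
max_seq_len = 3  # hypothetical truncation point

full_acc = sum(g == p for g, p in zip(gold, pred)) / len(gold)
trunc_acc = sum(g == p for g, p in zip(gold[:max_seq_len],
                                       pred[:max_seq_len])) / max_seq_len

print(full_acc, trunc_acc)  # 0.6 1.0
```

The mismatched entity labels fall past the truncation point, so the truncated metric looks perfect while the full-sequence metric does not.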
-
I wanted to train on Chinese datasets, but I found that I lack the corresponding "pytorch_model_uncased_L-24_H-1024_A-16" file. How can I get it?
-
Hi, thanks for the great work!
Is there any plan for a code release, as described in the paper?
![image](https://user-images.githubusercontent.com/8455454/157574056-89b8e958-4211-426f-b830-b83f9f79a42a.p…
-
Hello. I downloaded xlm-r-100langs-bert-base-nli-stsb-mean-tokens, and I use
`SentenceTransformer('pretrained-model/xlm-r-100langs-bert-base-nli-stsb-mean-tokens/0_Transformer')`
and the error …