-
### Issue:
I was testing the package with the Hugging Face `xlm-roberta-base` model, and it failed with the following error:
`IndexError: index out of range in self`
------------------------------------…
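`IndexError: index out of range in self` typically means a token id exceeds the number of rows in the model's embedding table, which happens when the tokenizer and model vocabularies don't match (for example, feeding XLM-R ids, vocab ~250k, into a BERT-sized embedding of ~30k rows). A minimal pure-Python sketch of that failure mode (the names and sizes here are illustrative, not taken from the package):

```python
# Sketch: why a tokenizer/model vocab mismatch raises "index out of range".
# In transformers, the real error comes from torch.nn.Embedding's lookup.

def embed(token_ids, vocab_size):
    """Mimic an embedding lookup: every id must be < vocab_size."""
    table = [[float(i)] for i in range(vocab_size)]  # toy embedding table
    return [table[i] for i in token_ids]  # IndexError if any id >= vocab_size

bert_vocab_size = 30522   # bert-base-uncased embedding rows
xlmr_token_id = 250001    # a plausible xlm-roberta-base id (vocab ~250k)

try:
    embed([xlmr_token_id], bert_vocab_size)
except IndexError as e:
    print("IndexError:", e)  # same failure mode as the reported crash
```

Checking `max(token_ids) < model.config.vocab_size` before the forward pass is a quick way to confirm this is the cause.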
-
After training on a separate machine we got some promising results, and we are now looking to move our model into production. However, we encountered an issue. Downloading missing files and verifying the…
-
The line `AutoTokenizer.from_pretrained("/data/lilinfang/clv/xlm-roberta-base")` works correctly.
However, your code uses:
`RobertaTokenizer.from_pretrained("/data/lilinfang/clv…
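`AutoTokenizer` succeeds because it reads `model_type` from the checkpoint's `config.json` and dispatches to the matching tokenizer class (`XLMRobertaTokenizer` here), whereas a hard-coded `RobertaTokenizer` expects Roberta-style vocab files that an XLM-R checkpoint does not ship. A rough sketch of that dispatch (simplified; the real mapping lives inside `transformers` and is much larger):

```python
# Simplified sketch of AutoTokenizer-style dispatch; not the real implementation.
TOKENIZER_FOR_MODEL_TYPE = {
    "bert": "BertTokenizer",
    "roberta": "RobertaTokenizer",
    "xlm-roberta": "XLMRobertaTokenizer",
}

def pick_tokenizer(config: dict) -> str:
    """Return the tokenizer class name for a checkpoint's config.json dict."""
    model_type = config["model_type"]
    try:
        return TOKENIZER_FOR_MODEL_TYPE[model_type]
    except KeyError:
        raise ValueError(f"unsupported model_type: {model_type!r}")

print(pick_tokenizer({"model_type": "xlm-roberta"}))  # XLMRobertaTokenizer
```

This is why replacing the hard-coded class with `AutoTokenizer.from_pretrained(...)` is usually the safer fix.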
-
Hi,
It seems from the source code that XLM-Roberta is fine-tuned with gradient updates based on the LSTM attention model. However, when I follow the README instructions and train the model on hi…
-
The code breaks when using any model other than BERT. I debugged the code and found that it is written for the BERT tokenizer only, while the tokenizers of other transformer models ar…
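One common source of this kind of breakage is hard-coding BERT's special tokens (`[CLS]`/`[SEP]`) instead of reading them from the tokenizer object; XLM-R and RoBERTa use `<s>`/`</s>`. A hedged sketch of the tokenizer-agnostic pattern, using dummy objects in place of real Hugging Face tokenizers:

```python
# Sketch: build inputs from tokenizer attributes rather than hard-coded
# BERT strings. DummyTokenizer stands in for a real HF tokenizer, which
# exposes the same cls_token / sep_token attributes.

class DummyTokenizer:
    def __init__(self, cls_token, sep_token):
        self.cls_token = cls_token
        self.sep_token = sep_token

def wrap(tokens, tokenizer):
    """Add special tokens generically, so non-BERT tokenizers also work."""
    return [tokenizer.cls_token] + tokens + [tokenizer.sep_token]

bert = DummyTokenizer("[CLS]", "[SEP]")
xlmr = DummyTokenizer("<s>", "</s>")
print(wrap(["hello"], bert))  # ['[CLS]', 'hello', '[SEP]']
print(wrap(["hello"], xlmr))  # ['<s>', 'hello', '</s>']
```

With real tokenizers, `tokenizer.build_inputs_with_special_tokens(...)` achieves the same model-agnostic behavior.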
-
### Describe the issue
Greetings,
Are there any plans to release instructions, or at least the dataset format, so we can fine-tune the `llmlingua-2-xlm-roberta-large-meetingbank` or the base `xlm-…
-
While training the XLM-Roberta-based QE system, I pre-downloaded the pre-trained XLM-Roberta model from Hugging Face's library and modified the field `system.model.encoder.model_name` in `xlmrobert…
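For pointing an encoder at a local checkpoint, the relevant config fragment usually looks like the following (a hypothetical YAML sketch matching the `system.model.encoder.model_name` field path; the exact file name and surrounding keys depend on the QE framework's config):

```yaml
# Hypothetical excerpt of the QE system config; only the field path
# system.model.encoder.model_name is taken from the report above.
system:
  model:
    encoder:
      model_name: /path/to/local/xlm-roberta-base  # pre-downloaded checkpoint
```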
-
It would be helpful to see those models from HF on the leaderboard:
* `xlm-roberta-base` (base of HerBERT)
* `xlm-roberta-large` (base of HerBERT)
* `facebook/xlm-roberta-xl` - needs more VRAM
* `…
-
**Describe the bug**
Cannot export the model.
**To Reproduce**
```
import keras
from keras_nlp.models import XLMRobertaPreprocessor, XLMRobertaBackbone
import tensorflow as tf
preprocessor …
```
-
Hello 👋
I tried using `ContextualWordEmbsAug` with the `xlm-roberta-base` model, but it does not seem to be supported. I needed it to do augmentation on a language that is not available in the `bert-base…