-
Hello,
Congratulations on this great work!
I am reaching out for pointers as I am unable to reproduce the accuracy results from the paper while using RoBERTa-Base.
I finetuned the RoBERTa-Bas…
-
```
Traceback (most recent call last):
  File "schema_item_classifier.py", line 463, in <module>
    _train(opt)
  File "schema_item_classifier.py", line 271, in _train
    model_outputs = model(
  File "/h…
```
-
## 🐛 Bug
Computation on a TPUv4 Pod crashes with an error. To get it running on TPUv4 at all, I had to change one value in the spawn (see https://github.com/facebookresearch/fairseq/compare/main...sche…
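The compare link above is truncated, so the exact diff isn't visible here. For context, a typical "one value in the spawn" change when moving to TPU v4 concerns the `nprocs` argument of `xmp.spawn`; the sketch below is an assumption about what that change looks like, and the training function is a placeholder:
```
import torch_xla.distributed.xla_multiprocessing as xmp

def _mp_fn(index):
    # Per-process training entry point (illustrative placeholder).
    ...

# On TPU v2/v3 it was common to hard-code nprocs=8 (one process per
# core). On TPU v4 that value no longer matches the topology; passing
# nprocs=None lets torch_xla pick the correct process count itself.
xmp.spawn(_mp_fn, args=(), nprocs=None)
```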
-
Has anyone reproduced the LoRA results for roberta-base? I found that my reproduction of LoRA cannot reach the results the paper claims.
e.g.:
The paper claims that RTE achieved…
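For reference, this is the configuration I would compare a reproduction against, written with the Hugging Face peft library. The hyperparameters below (r=8, alpha=8, adapting only the query and value projections) are my reading of the paper's RoBERTa-base setup, so treat them as assumptions to verify:
```
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

# RTE is a two-class task.
base = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2
)

config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,                                  # assumed from the paper
    lora_alpha=8,                         # assumed from the paper
    lora_dropout=0.1,                     # my assumption, not from the paper
    target_modules=["query", "value"],    # LoRA on Wq and Wv only
)

model = get_peft_model(base, config)
model.print_trainable_parameters()
```
One thing worth checking when RTE in particular falls short: if I recall the paper correctly, some of its RTE/MRPC/STS-B runs are initialized from an MNLI-finetuned checkpoint rather than from the raw pretrained weights, which alone can move RTE accuracy by several points.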
-
The line AutoTokenizer.from_pretrained("/data/lilinfang/clv/xlm-roberta-base")
works well when used instead.
However, your code has the line: RobertaTokenizer.from_pretrained("/data/lilinfang/clv…
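For what it's worth, the difference between the two loaders explains this behavior; a minimal sketch using the local path from the report above:
```
from transformers import AutoTokenizer

# AutoTokenizer reads the checkpoint's config and dispatches to
# XLMRobertaTokenizer, which uses the SentencePiece model that
# xlm-roberta-base ships with.
tokenizer = AutoTokenizer.from_pretrained("/data/lilinfang/clv/xlm-roberta-base")

# RobertaTokenizer, by contrast, expects GPT-2-style vocab.json /
# merges.txt files, which an XLM-R checkpoint does not contain, so
# loading it that way fails or mis-tokenizes.
```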
-
Do we have scripts that specify whether to use RoBERTa for text processing? Also, do we provide the checkpoint with RoBERTa?
-
## 🐛 Bug
Scalar Quantization does not seem to work on a pretrained RoBERTa model.
### To Reproduce
Script to run without quantization:
```
TOTAL_NUM_UPDATES=2036
WARMUP_UPDATES=122
LR=2e-05
…
```
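The reproduction script above is truncated. As an aside, a quick way to sanity-check int8 quantization of a pretrained RoBERTa outside fairseq is PyTorch's post-training dynamic quantization; note this is not fairseq's in-training scalar quantization (quant-noise) path, just a plain-PyTorch sketch:
```
import torch
from transformers import RobertaModel

# Load the pretrained checkpoint and switch to eval mode.
model = RobertaModel.from_pretrained("roberta-base")
model.eval()

# Post-training dynamic quantization of the Linear layers to int8.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# Compare outputs on a dummy batch to gauge the quantization error.
ids = torch.randint(0, 50265, (1, 16))
with torch.no_grad():
    ref = model(ids).last_hidden_state
    q = quantized(ids).last_hidden_state
print((ref - q).abs().max())
```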
-
Whether training or testing, I run into the following problem, which prevents me from continuing:
WARNING [09/06 16:49:31 fvcore.common.checkpoint]: Some model parameters or buffers are not found in the checkpoint:
text_encoder.embeddings.position_ids
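For context, in the Hugging Face RoBERTa implementation position_ids is a buffer filled deterministically at construction time, so its absence from a checkpoint is usually harmless; a quick check (the hub model name is illustrative):
```
from transformers import RobertaModel

model = RobertaModel.from_pretrained("roberta-base")

# In the transformers versions I have seen, position_ids is registered
# as a (non-persistent) buffer equal to arange(max_position_embeddings),
# rebuilt at __init__, so a checkpoint does not need to store it.
pos = getattr(model.embeddings, "position_ids", None)
if pos is not None:
    print(pos[0, :8])  # tensor([0, 1, 2, 3, 4, 5, 6, 7])
```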
-
### Motivation.
As vLLM supports more and more models and features, they require different attention, scheduler, executor, and input/output processor implementations. These modules are becoming increasingly com…
-
I noticed the RoBERTa fine-tuning script saves the fine-tuned model locally with:
```
model_to_save = model
torch.save(model_to_save, output_model_file)
tokenizer.save_vocabulary(output_vocab_file)
```
…
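If the concern is the whole-object torch.save above, the more portable idiom saves only the state_dict; a sketch reusing the variable names from the snippet:
```
# Saving the state_dict keeps the checkpoint loadable even if the
# model class definition later moves or changes, unlike pickling the
# whole module object.
torch.save(model_to_save.state_dict(), output_model_file)

# To reload: rebuild the model first, then restore the weights
# (model construction is elided here, as in the original snippet).
model.load_state_dict(torch.load(output_model_file))
```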