-
```
Evaluating: 0% 0/38 [00:00
```
-
Hi, I was trying to adapt K-BERT for RoBERTa and tried using the pre-trained RoBERTa model from Hugging Face for that. But somehow, the model never seems to converge at all and gives very poor scor…
-
When fine-tuning the siamese network with the command below, an error is raised.
`python finetune/run_classifier_siamese.py --pretrained_model_path chinese_roberta/pytorch_model.bin --vocab_path chinese_roberta/vocab.txt --config_path chinese_roberta/config.j…
-
### Is your feature request related to a problem? Please describe.
_No response_
### Describe the solution you'd like
Is the model that works best now llmlingua-2-xlm-roberta-large-meetingbank? If …
-
Hi -
could you provide a code snippet for how to load the model weights from
https://transformer-models.s3.amazonaws.com/2019n2c2_tack1_roberta_pt_stsc_6b_16b_3c_8c.zip
into the Roberta mod…
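The general pattern is to `torch.load` the extracted `pytorch_model.bin` and feed the resulting state dict into the model with `load_state_dict`. A minimal, self-contained sketch of that pattern follows; it uses a toy `nn.Linear` and a temporary file as stand-ins, since in practice you would point `torch.load` at the checkpoint extracted from the S3 zip and construct a `RobertaModel` from a `RobertaConfig` instead:

```python
import os
import tempfile

import torch
import torch.nn as nn

# Toy module standing in for RobertaModel; the real call would be
# RobertaModel(RobertaConfig.from_pretrained("roberta-base")).
model = nn.Linear(4, 4)

with tempfile.TemporaryDirectory() as d:
    # Stand-in for the pytorch_model.bin extracted from the downloaded zip.
    path = os.path.join(d, "pytorch_model.bin")
    torch.save(model.state_dict(), path)

    # The actual load pattern: read the checkpoint onto CPU, then load it.
    # strict=False tolerates key differences and reports them instead of raising.
    state_dict = torch.load(path, map_location="cpu")
    missing, unexpected = model.load_state_dict(state_dict, strict=False)

print(missing, unexpected)  # both empty when the checkpoint keys line up
```

With `strict=False`, mismatched keys are returned as `missing_keys`/`unexpected_keys` rather than raising, which is useful when a fine-tuned checkpoint carries extra task-head parameters.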
-
Hi all,
If one runs the `evaluate.py` script against our transformation (#230), the results are very strange. The performance is too good, considering the dramatic changes made by our transformatio…
-
Hey, I am unable to load the model from the huggingface checkpoint. Here is the code and the error:
```py
from DictMatching.moco import MoCo
from utilsWord.test_args import getArgs
from transfor…
```
-
I am using your model to fine-tune on a binary classification task (**number of classes = 2** instead of 16). **My class labels are just 0 and 1.**
https://huggingface.co/unitary/unbiased-tox…
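One common way to reuse a 16-class checkpoint for a 2-class task is to drop the old classifier-head weights from the state dict and load the rest with `strict=False`, letting the new head initialize randomly. A torch-only sketch with a hypothetical `TinyClassifier` standing in for `RobertaForSequenceClassification` (the layer names and sizes are illustrative, not the real model's):

```python
import torch.nn as nn

class TinyClassifier(nn.Module):
    """Toy stand-in: a shared 'encoder' plus a task-specific 'classifier' head."""
    def __init__(self, num_labels):
        super().__init__()
        self.encoder = nn.Linear(4, 8)
        self.classifier = nn.Linear(8, num_labels)

source = TinyClassifier(num_labels=16)  # stand-in for the 16-class checkpoint
target = TinyClassifier(num_labels=2)   # the new binary model

# Drop the mismatched head weights, then load the rest non-strictly.
state = {k: v for k, v in source.state_dict().items()
         if not k.startswith("classifier")}
missing, unexpected = target.load_state_dict(state, strict=False)

print(missing)  # only the classifier parameters remain uninitialized from the checkpoint
```

With the `transformers` library, passing `num_labels=2` together with `ignore_mismatched_sizes=True` to `from_pretrained` achieves the same effect without manual key filtering.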
-
Hi,
When using roberta-large-openai-detector for multiclass classification, I am getting the error below:
`RuntimeError: Error(s) in loading state_dict for RobertaForSequenceClassification:
size mis…
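This size-mismatch error is expected: the detector checkpoint ships a 2-label classification head, so its weights cannot be copied into a head with a different number of outputs. A torch-only sketch of the failure, using toy `nn.Linear` heads with illustrative dimensions (not the real detector's):

```python
import torch.nn as nn

# Toy stand-ins: the checkpoint's 2-class head vs. a new multiclass head.
pretrained_head = nn.Linear(8, 2)   # roberta-large-openai-detector has 2 labels
multiclass_head = nn.Linear(8, 5)   # e.g. a hypothetical 5-class task

raised = False
try:
    # Copying [2, 8] weights into a [5, 8] layer fails, just like the
    # RobertaForSequenceClassification error above.
    multiclass_head.load_state_dict(pretrained_head.state_dict())
except RuntimeError as err:
    raised = "size mismatch" in str(err)

print(raised)
```

With `transformers`, the usual remedy is to pass the desired `num_labels` plus `ignore_mismatched_sizes=True` to `from_pretrained`, which discards the old head and randomly initializes a new one of the right shape.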
-
Like BERTScore and BLEURT, MoverScore is another modern transformer-based, reference-based summarization metric.
However, we did not include it in our pilot study. Now may be a good time to add it. …