-
Code:
from transformers import AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
Running it fails with an error about a missing configuration file:
ValueError: Unrecognized model in XXX. **Should have a `model_type` key in its config.json**, or contain one of t…
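This error means the `Auto*` classes could not find a `model_type` entry in the checkpoint's `config.json`, which is what they use to dispatch to the right architecture. A minimal sketch of checking for it before loading; the directory and config contents below are invented for illustration:

```python
import json
from pathlib import Path
from tempfile import TemporaryDirectory

def has_model_type(checkpoint_dir: str) -> bool:
    """Return True if config.json exists and declares a model_type key."""
    cfg_path = Path(checkpoint_dir) / "config.json"
    if not cfg_path.is_file():
        return False
    cfg = json.loads(cfg_path.read_text())
    return "model_type" in cfg

# Hypothetical checkpoint directory whose config.json lacks model_type.
with TemporaryDirectory() as d:
    (Path(d) / "config.json").write_text(json.dumps({"hidden_size": 768}))
    print(has_model_type(d))   # → False: AutoModel cannot dispatch this config
    (Path(d) / "config.json").write_text(
        json.dumps({"model_type": "xlm-roberta", "hidden_size": 768})
    )
    print(has_model_type(d))   # → True
```

If the check fails, the fix is usually to add the correct `model_type` to `config.json` or to load the model with its concrete class instead of an `Auto*` class.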
-
Trying to compile xlm-roberta-xl fails. This is probably because the model is either too large for the server to unpack or too large to send over.
When the server sends the compiled model to the clien…
-
This is not an issue but a question. I discovered that the TweetNLP demo can classify multilingual texts, including Turkish. Can I classify Turkish texts with this version? I haven't tried it yet, so …
-
Following what was done by @ChainYo in Transformers, in the [ONNXConfig: Add a configuration for all available models](https://github.com/huggingface/transformers/issues/16308) issue, the idea is to a…
-
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "vicgalle/xlm-roberta-large-xnli-anli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
IndexError …
-
Hi,
I'm trying to use a cross-encoder with a fine-tuned model (BERT-base) as the pretrained model, but I got this error:
ValueError: Unrecognized model
Should have a `model_type` key in its co…
-
### Describe the issue
### Context
I encountered an error while attempting to convert a Microsoft Phi3 model to ONNX format using Python and the Transformers library. The conversion process fails wi…
-
Question: when I load the tokenizer for the bge-rerank model, the code below parses the input as follows:
```
query = '中国人你好'
title = '你好中国人'
res = tokenizer.encode_plus(
    query,
    title,
    add_special_tokens=True,
…
-
I am trying to follow your code for building a custom Longformer for XLM models (typically XLM-RoBERTa); however, I get NaN values as soon as I start training my model on a downstream classification task. He…
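One common way to localize where NaNs first appear is to scan named values after each training step. A minimal stdlib sketch; the parameter dict below is invented, and in practice one would iterate over the model's parameters or gradients instead:

```python
import math

def first_nan(named_values):
    """Return the first name whose values contain NaN or inf, else None."""
    for name, values in named_values.items():
        if any(math.isnan(v) or math.isinf(v) for v in values):
            return name
    return None

# Invented example: the 'attention' values have overflowed to NaN.
params = {
    "embeddings": [0.1, -0.2, 0.3],
    "attention": [0.5, float("nan"), 1.0],
    "classifier": [0.0, 0.7],
}
print(first_nan(params))  # → attention
```

Running such a check after the first few optimizer steps usually narrows the NaNs down to one layer, which helps distinguish an initialization problem from a learning-rate or loss-scaling one.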
-
Hi @NielsRogge,
Many thanks for your notebook [Create_LiLT_+_XLM_RoBERTa_base.ipynb](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LiLT/Create_LiLT_%2B_XLM_RoBERTa_base.ipynb).
…