-
Hello,
First, thanks for these great models! I was wondering whether I could use them for zero-shot classification, especially for emotion detection (Ekman's six basic emotions). While doing so, I encountered this …
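For reference, zero-shot classification with such models is usually done NLI-style: each candidate label is scored via an entailment hypothesis (in practice through transformers' `zero-shot-classification` pipeline with an NLI checkpoint), and the per-label scores are softmax-ranked. A minimal sketch of the ranking step, where the per-label logits are hypothetical stand-ins for model output:

```python
import math

# Ekman's six basic emotions, used as candidate labels
EKMAN_LABELS = ["anger", "disgust", "fear", "joy", "sadness", "surprise"]

def rank_labels(entailment_logits):
    """Given one entailment logit per label (as an NLI model would produce
    for hypotheses like 'This text expresses {label}.'), return the labels
    with softmax-normalized probabilities, best first."""
    exps = [math.exp(s) for s in entailment_logits]
    total = sum(exps)
    ranked = sorted(
        zip(EKMAN_LABELS, (e / total for e in exps)),
        key=lambda pair: pair[1],
        reverse=True,
    )
    return ranked

# Hypothetical logits; in practice they come from the NLI model
best_label, best_prob = rank_labels([0.1, -1.2, 0.3, 2.5, -0.4, 0.0])[0]
# -> best_label == "joy"
```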
-
https://lmsys.org/blog/2023-06-29-longchat/
https://arxiv.org/abs/2305.07185
https://www.reddit.com/r/LocalLLaMA/comments/14fgjqj/a_simple_way_to_extending_context_to_8k/
https://github.com/epfml…
-
Interesting paper. Regarding the pretrained models, I'm wondering: are they RoBERTa-based or XLM-R-based? Did you evaluate performance with mDeBERTa as the base model?
And finally, how would one use such/this mod…
zidsi updated 6 months ago
-
Hey,
this is the second time I have encountered low results for specific models. In short, I once trained `deepset/gbert-base` with `train_msmarco_v3_margin_MSE.py` and it worked like a charm. Then I tried …
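For context, `train_msmarco_v3_margin_MSE.py` trains with a Margin-MSE distillation objective: the student's score margin between a positive and a negative passage is regressed onto a teacher's margin. A minimal sketch of that loss in plain Python, with hypothetical scores:

```python
def margin_mse(student_pos, student_neg, teacher_pos, teacher_neg):
    """Margin-MSE distillation loss: mean squared error between the
    student's margin (pos - neg) and the teacher's margin, per triple."""
    diffs = [
        (sp - sn) - (tp - tn)
        for sp, sn, tp, tn in zip(student_pos, student_neg, teacher_pos, teacher_neg)
    ]
    return sum(d * d for d in diffs) / len(diffs)

# Hypothetical scores for two (query, positive, negative) triples
loss = margin_mse([2.0, 1.0], [1.0, 0.5], [3.0, 1.2], [1.5, 0.8])
```

Note the absolute scores never need to match the teacher's; only the margins do, which is why a cross-encoder teacher can distill into a bi-encoder student.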
-
python: 3.7
transformers: 4.9.2
pytorch: 1.8.1
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("huawei-noah/TinyBERT_4L_zh")
model = AutoM…
```
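If loading succeeds, a common next step with such encoders is pooling the token embeddings into a single sentence vector. A dependency-free sketch of mask-aware mean pooling (the toy vectors are made up; real input would come from `model(**inputs).last_hidden_state` and the tokenizer's `attention_mask`):

```python
def mean_pool(token_vectors, attention_mask):
    """Mask-aware mean pooling: average only real tokens (mask == 1),
    skipping padding positions."""
    dim = len(token_vectors[0])
    summed = [0.0] * dim
    count = 0
    for vec, keep in zip(token_vectors, attention_mask):
        if keep:
            count += 1
            for i, value in enumerate(vec):
                summed[i] += value
    return [s / max(count, 1) for s in summed]

# Toy 3-token sequence with 2-dim embeddings; the last token is padding
sentence_vec = mean_pool([[1.0, 2.0], [3.0, 4.0], [9.0, 9.0]], [1, 1, 0])
# -> [2.0, 3.0]
```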
-
You did a great job in the VDU field. Congratulations!
By the way, I wonder whether I can replace mBART with XLM-RoBERTa in the fine-tuning process without redoing the pretraining?
-
## ❓ Questions and Help
### Before asking:
1. Search for similar [issues](https://github.com/Unbabel/COMET/issues).
2. Search the [docs](https://unbabel.github.io/COMET/html/index.html).
…
-
Following what @ChainYo did in Transformers in the [ONNXConfig: Add a configuration for all available models](https://github.com/huggingface/transformers/issues/16308) issue, the idea is to a…
-
Quick question: when I load the tokenizer of the bge-rerank model, the code below parses the input as follows:
```python
query = '中国人你好'
title = '你好中国人'
res = tokenizer.encode_plus(
    query,
    title,
    add_special_tokens=True,
    …
```
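What `encode_plus` does for a sentence pair can be sketched with a toy BERT-style layout; note this is only the conventional layout, and the actual special tokens depend on the tokenizer (XLM-RoBERTa-style tokenizers use `<s>`/`</s>` instead of `[CLS]`/`[SEP]` and may omit `token_type_ids`):

```python
def encode_pair(query_tokens, title_tokens):
    """BERT-style pair encoding: [CLS] query [SEP] title [SEP], with
    token_type_ids marking which segment each position belongs to."""
    tokens = ["[CLS]"] + query_tokens + ["[SEP]"] + title_tokens + ["[SEP]"]
    token_type_ids = (
        [0] * (len(query_tokens) + 2)   # [CLS] + query + first [SEP]
        + [1] * (len(title_tokens) + 1) # title + second [SEP]
    )
    return tokens, token_type_ids

tokens, type_ids = encode_pair(["中国", "人", "你", "好"], ["你", "好", "中国", "人"])
```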
-
```
Didn't find file /home/chenwh/QE/UniTE-models/UniTE-MUP/checkpoints/sentencepiece.bpe.model. We won't load it.
Didn't find file /home/chenwh/QE/UniTE-models/UniTE-MUP/checkpoints/added_tokens.json. W…
```