-
RuntimeError: mat1 and mat2 shapes cannot be multiplied (11264x1024 and 768x6)
When
```
'roberta': (
'transformers.BertTokenizer',
'transformers.RobertaModel',
'transforme…
```
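A mismatch like `(11264x1024 and 768x6)` usually means a large-variant encoder (hidden size 1024) is being fed into a classifier head built for a base-size model (hidden size 768). A framework-free sketch of the shape rule, using the shapes from the error above as illustrative values:

```python
# Sketch: matrix multiplication requires the inner dimensions to agree.
# 11264x1024 looks like the flattened output of a *-large encoder (hidden 1024);
# 768x6 looks like a classification head sized for a *-base model (hidden 768).
def can_matmul(a_shape, b_shape):
    """Return True if an (m x k) @ (k x n) product is defined."""
    return a_shape[1] == b_shape[0]

enc_out = (11264, 1024)   # encoder output from a large model
base_head = (768, 6)      # head weight expecting base-size hidden states
large_head = (1024, 6)    # head weight matching the large model's hidden size

print(can_matmul(enc_out, base_head))   # False -> the RuntimeError above
print(can_matmul(enc_out, large_head))  # True once the head matches
```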
-
D:\asdasd\AI\GPT-SoVITS-Server-main\GPT-SoVITS-Server-main>python server.py
DirectML is available; DirectML will be used to accelerate inference.
Device name: NVIDIA GeForce GTX 1650
Traceback (most recent call last):
File "D:\asdasd\AI\GPT-…
-
My project requires RoBERTa. Can I just put the model and vocab for chinese-roberta-wwm-ext under the bert_pretain directory and train directly?
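When dropping chinese-roberta-wwm-ext into a BERT-style pretrained-model directory, a common failure mode is a missing or misnamed file rather than the model itself. A rough sketch of a sanity check (the directory name `bert_pretain` and the exact file list are assumptions based on typical BERT loaders; adjust to the repo in question):

```python
from pathlib import Path

# Files a typical BERT-style loader expects (assumption; check your repo's loader).
REQUIRED = ["config.json", "pytorch_model.bin", "vocab.txt"]

def missing_files(model_dir):
    """Return the expected files that are absent from model_dir."""
    d = Path(model_dir)
    return [name for name in REQUIRED if not (d / name).is_file()]

print(missing_files("bert_pretain"))  # anything listed still needs copying in
```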
-
OSError: Can't load '../init_model//roberta_wwm_ext/config.json'. Make sure that:
- '../init_model//roberta_wwm_ext/config.json' is a correct model identifier listed on 'https://huggingface.co/mode…
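For what it's worth, the doubled slash in that path is harmless (path normalization collapses it), so an error like this usually comes down to the relative path not resolving from the current working directory, or the file genuinely being absent. A quick sanity check, reusing the path from the error:

```python
import os

# The '//' collapses under normalization, so it is not the cause of the error.
p = "../init_model//roberta_wwm_ext/config.json"
print(os.path.normpath(p) == os.path.normpath("../init_model/roberta_wwm_ext/config.json"))  # True

# What actually matters: does the relative path resolve from where the script runs?
print("cwd:", os.getcwd())
print("config.json found:", os.path.isfile(p))
```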
-
Hi, wondering if you can help... I'm facing an issue when loading main.py
Traceback (most recent call last):
File "C:\CP\LocalAIVtuber\venv\lib\site-packages\transformers\utils\hub.py", line …
-
The error report is as follows. What is the problem, and how do I fix it?
```
usage: main.py [-h] --train-data TRAIN_DATA [--val-data VAL_DATA] [--num-workers NUM_WORKERS] [--logs LOGS] [--name NAME] [--log-interval LOG_INTERVAL] [--report-training-ba…
-
When I run multi-label classification experiments with the bert-base-case, chinese-bert-wwm-ext, chinese-roberta-wwm-ext, and chinese-roberta-wwm-ext-large pretrained models, everything works fine. Why is it that with the roberta-xlarge-wwm-chinese-cluecorpussmall pretrained model, throughout training the
accuracy:…
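In multi-label setups each label is scored independently (sigmoid plus threshold), so a flat accuracy with a much larger encoder often points to training diverging (xlarge models typically need a smaller learning rate) rather than to the metric. A dependency-free sketch of how per-label accuracy is usually computed (all numbers are illustrative):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def multilabel_accuracy(logits, targets, thresh=0.5):
    """Fraction of (example, label) cells where sigmoid(logit) > thresh matches the target."""
    correct = total = 0
    for row_logits, row_targets in zip(logits, targets):
        for logit, target in zip(row_logits, row_targets):
            correct += int((sigmoid(logit) > thresh) == bool(target))
            total += 1
    return correct / total

print(multilabel_accuracy([[2.0, -3.0], [0.1, -0.2]], [[1, 0], [1, 0]]))  # 1.0
```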
-
Which script should be used to convert the HFL Chinese RoBERTa-wwm-large-ext model into UER format? Is it convert_bert_from_huggingface_to_uer.py?
-
```python
import torch
from PIL import Image
import cn_clip.clip as clip
from cn_clip.clip import load_from_name, available_models
print("Available models:", available_models())
# Available models: ['…
```
-
Hello, I was wondering if we can replace the BERT used for the currently supported models
```python
Languages.JP: BASE_DIR / "bert" / "deberta-v2-large-japanese-char-wwm",
Languages.EN: BASE_DIR / "…