-
The README links to https://github.com/ymcui/Chinese-BERT-wwm , where the wwm model can be downloaded. After unzipping it, we find vocab.txt, pytorch_model.bin, and bert_config.json.
So the questio…
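A common stumbling block with that unzipped archive is that Hugging Face Transformers looks for `config.json`, while the download ships `bert_config.json`. A minimal sketch of the rename step, assuming the files were unzipped into a directory named `chinese_wwm_pytorch` (that path is hypothetical; the file creation below only simulates the archive contents for illustration):

```python
import shutil
from pathlib import Path

model_dir = Path("chinese_wwm_pytorch")  # hypothetical unzip location
model_dir.mkdir(exist_ok=True)
# Simulate the three files shipped in the downloaded archive.
for name in ("vocab.txt", "pytorch_model.bin", "bert_config.json"):
    (model_dir / name).touch()

# Transformers expects config.json, so copy bert_config.json to that name.
src = model_dir / "bert_config.json"
if src.exists() and not (model_dir / "config.json").exists():
    shutil.copy(src, model_dir / "config.json")

print(sorted(p.name for p in model_dir.iterdir()))

# With a real archive in place, loading then works from the local directory:
# from transformers import BertTokenizer, BertModel
# tokenizer = BertTokenizer.from_pretrained(str(model_dir))
# model = BertModel.from_pretrained(str(model_dir))
```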
-
Thanks for your work!
I cannot find the file _train-embeddings-base-1gpu.json_ mentioned in _ReadMe.md_, but I did find a _bert-wwm-ext_literature_ file. Does the _bert-wwm-ext_literature_ file replace…
-
What is competition_train.db used for?
While reading through your code closely, I have a few questions:
1. In Preprocessing:
![image](https://github.com/VisualJoyce/ChengyuBERT/assets/48983651/36d2e442-0d1b-4a1a-9b53-d4cffacf79cd)
What are these official_*.db files for? Can…
-
RuntimeError: The version of PaddlePaddle(1.8.1) or PaddleHub(1.7.1) can not match module, please upgrade your PaddlePaddle or PaddleHub according to the form below.
+--------------------------------…
-
Multi-label classification experiments run fine with the pretrained models bert-base-case, chinese-bert-wwm-ext, chinese-roberta-wwm-ext, and chinese-roberta-wwm-ext-large. Why is it that with the roberta-xlarge-wwm-chinese-cluecorpussmall pretrained model, throughout training it keeps showing
accuracy:…
-
@songyouwei chinese-bert-wwm is a good Chinese pretrained model I found, but I ran into problems using it. How should I call it, or how can I modify the code to adapt it to the network? [https://github.com/ymcui/Chinese-BERT-wwm#%E4%BD%BF%E7%94%A8%E5%BB%BA%E8%AE%AE](url)
-
Hello:
I train a QA model with the same data pipeline. With bert-wwm I can set the batch size to 12, but with albert-xxlarge-v2 I can only go up to 6. Yet the albert-xxlarge-v2 model file is only about 900 MB, while bert-wwm's is about 1400 MB. What could cause this?
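The checkpoint size reflects the parameter count, but training memory is dominated by the activations stored for backprop, which scale with hidden size × layers × sequence length × batch size. ALBERT shares one set of weights across all layers (hence the small file), yet albert-xxlarge-v2 uses hidden size 4096, so its activations dwarf those of a BERT-large-style model (hidden size 1024, 24 layers). A deliberately rough sketch (the per-layer tensor count is an assumption and ignores attention score matrices):

```python
def activation_floats_per_token(hidden: int, layers: int, ffn_mult: int = 4) -> int:
    """Very rough per-token activation count for a transformer encoder:
    counts the main residual-stream and feed-forward tensors kept for
    backprop; attention score matrices (which grow with sequence length)
    are ignored."""
    per_layer = hidden * (2 + 2 * ffn_mult)  # attn output + residual + FFN in/out
    return layers * per_layer

# albert-xxlarge-v2: hidden=4096 over 12 (weight-shared) layers.
# BERT-large-style:  hidden=1024 over 24 layers.
albert = activation_floats_per_token(hidden=4096, layers=12)
bert_large = activation_floats_per_token(hidden=1024, layers=24)
print(albert / bert_large)  # prints 2.0: ~2x the activation memory per token
```

Under this simplification ALBERT needs roughly twice the activation memory per token, which matches the observed halving of the feasible batch size.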
-
from kashgari.corpus import ChineseDailyNerCorpus
from kashgari.embeddings import BertEmbedding
from kashgari.tasks.labeling import BiLSTM_CRF_Model
import kashgari
X_train, y_train = ChineseDa…
-
C:\Users\35845\Desktop\连铸\程序\模型构建\ner命名实体识别\BERT-BiLSTM-CRF\ner>python main.py
['O', 'B-故障设备', 'I-故障设备', 'B-故障原因', 'I-故障原因']
{'O': 0, 'B-故障设备': 1, 'I-故障设备': 2, 'B-故障原因': 3, 'I-故障原因': 4}
C:\Users\35…
-
404 Client Error: Not Found for url: https://huggingface.co//model/chinese-bert-wwm-ext/resolve/main/config.json
Traceback (most recent call last):
File "H:\Anaconda\envs\softmasked\lib\site-packa…
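The double slash in `https://huggingface.co//model/chinese-bert-wwm-ext/...` suggests the model id was passed with a leading slash or an empty namespace, so `from_pretrained` built a broken URL. On the Hub this model is published under the `hfl` organization. A minimal sketch of why the URL comes out malformed (the `config_url` helper is illustrative, not the library's actual code):

```python
HUB = "https://huggingface.co"

def config_url(repo_id: str) -> str:
    # Mirrors the shape of the Hub resolve URL for a model's config.json.
    return f"{HUB}/{repo_id}/resolve/main/config.json"

# A repo id with a leading slash reproduces the broken double-slash URL
# from the traceback:
print(config_url("/model/chinese-bert-wwm-ext"))
# -> https://huggingface.co//model/chinese-bert-wwm-ext/resolve/main/config.json

# The model lives under the hfl namespace, so the id should be:
print(config_url("hfl/chinese-bert-wwm-ext"))
# -> https://huggingface.co/hfl/chinese-bert-wwm-ext/resolve/main/config.json
```

With the corrected id, `AutoModel.from_pretrained("hfl/chinese-bert-wwm-ext")` should resolve without the 404.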