-
## ❓ Questions & Help
I tried fine-tuning BERT on SQuAD on my local computer. The script I ran was:
```
python3 ./examples/run_squad.py \
--model_type bert \
--model_name_or_path bert-…
```
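For reference, a minimal sketch of the core training step that run_squad.py performs, assuming pytorch-transformers is installed; the model name, example text, and answer-span indices below are only illustrative:

```python
# Minimal sketch (not the full run_squad.py pipeline): one training step of
# BertForQuestionAnswering. Tensor contents are toy values.
import torch
from pytorch_transformers import BertTokenizer, BertForQuestionAnswering

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForQuestionAnswering.from_pretrained("bert-base-uncased")
model.train()

question = "Who wrote BERT?"
context = "BERT was written by researchers at Google."
tokens = ["[CLS]"] + tokenizer.tokenize(question) + ["[SEP]"] \
         + tokenizer.tokenize(context) + ["[SEP]"]
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])

# Toy answer span ("researchers at google") as token indices inside `tokens`.
start_positions = torch.tensor([10])
end_positions = torch.tensor([12])

outputs = model(input_ids,
                start_positions=start_positions,
                end_positions=end_positions)
loss = outputs[0]   # first element is the span-prediction loss
loss.backward()     # run_squad.py then clips gradients and calls optimizer.step()
```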
-
```
Traceback (most recent call last):
  File "run_classifier_serving.py", line 1087, in <module>
    tf.app.run()
  File "/data/aif/common/anaconda/envs/py3nlp_todd/lib/python3.6/site-packages/tensorflow/pyth…
```
-
## ❓ Questions & Help
This is about the padding problem. In the GLUE example code, the padding for XLNet is on the left of the input, but in the SQuAD code, the padding is on the right. I was wondering…
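To illustrate the difference, a toy sketch of the same sequence padded on the left versus on the right; the token ids and pad id here are made up, and in the real examples they come from the tokenizer:

```python
# Toy illustration of left vs. right padding; ids are arbitrary.
pad_id = 0                      # placeholder; the real pad id comes from the tokenizer
input_ids = [17, 158, 9, 4]     # 4 real tokens
attention = [1, 1, 1, 1]
max_len = 8
n_pad = max_len - len(input_ids)

# Left padding (as in the XLNet GLUE example): real tokens end the sequence.
left_ids, left_mask = [pad_id] * n_pad + input_ids, [0] * n_pad + attention

# Right padding (as in run_squad.py): real tokens start the sequence.
right_ids, right_mask = input_ids + [pad_id] * n_pad, attention + [0] * n_pad

print(left_ids, left_mask)     # [0, 0, 0, 0, 17, 158, 9, 4] [0, 0, 0, 0, 1, 1, 1, 1]
print(right_ids, right_mask)   # [17, 158, 9, 4, 0, 0, 0, 0] [1, 1, 1, 1, 0, 0, 0, 0]
```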
-
In the documentation:
- embeddings
- chinese_L-12_H-768_A-12/ (use Google's pretrained model; it has already been compressed and uploaded,
keras-bert can also load the Baidu ERNIE model (conversion required, [https://github.com/ArthurRizar/tensorflow_ernie](https://githu…
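For context, a minimal keras-bert loading sketch, assuming the standard file names inside the chinese_L-12_H-768_A-12 release and an illustrative fixed sequence length of 128:

```python
# Sketch: loading the chinese_L-12_H-768_A-12 checkpoint with keras-bert.
# File names below are the ones shipped in Google's release; adjust paths as needed.
import os
from keras_bert import load_trained_model_from_checkpoint

model_dir = "chinese_L-12_H-768_A-12"
config_path = os.path.join(model_dir, "bert_config.json")
checkpoint_path = os.path.join(model_dir, "bert_model.ckpt")

model = load_trained_model_from_checkpoint(config_path, checkpoint_path,
                                           training=False, seq_len=128)
model.summary()
```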
-
Are there reported results for bert-wwm-ext on downstream tasks?
I see that the readme.md provides some, but they are incomplete. Is there a complete report?
Also, will a paper about the ext model be published? Thanks.
-
ValueError: Couldn't find 'checkpoint' file or checkpoints in given directory chinese_roberta_wwm_ext_L-12_H-768_A-12
Tried using https://github.com/huggingface/pytorch-transformers/blob/master/pytorch_tra…
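This error usually appears when the converter is pointed at the directory instead of the bert_model.ckpt prefix (the TF release ships no 'checkpoint' state file). A hedged sketch, assuming the conversion script exposes convert_tf_checkpoint_to_pytorch(tf_checkpoint_path, bert_config_file, pytorch_dump_path); the function signature may differ between versions, and the paths are assumptions about the local layout:

```python
# Sketch: point the converter at the checkpoint *prefix*, not at the directory.
# Function/argument order follows the pytorch-transformers conversion script and
# may differ between versions.
from pytorch_transformers.convert_tf_checkpoint_to_pytorch import (
    convert_tf_checkpoint_to_pytorch,
)

model_dir = "chinese_roberta_wwm_ext_L-12_H-768_A-12"
convert_tf_checkpoint_to_pytorch(
    model_dir + "/bert_model.ckpt",    # prefix of the .index/.data-00000-of-00001 files
    model_dir + "/bert_config.json",   # BERT config shipped with the release
    "chinese_roberta_wwm_ext_pytorch/pytorch_model.bin",
)
```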
-
Hello, I have a question: the original RoBERTa paper uses a BPE tokenizer, but SCIR's roberta-wwm still seems to use the BERT family of classes (BertConfig, BertTokenizer, i.e. the classes in huggingface pytorch_transformers). So is the model itself still bert-wwm, and only the training procedure follows RoBERTa?
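For what it's worth, the released directory contains a BERT-style vocab.txt, so tokenization goes through BertTokenizer (WordPiece, roughly character-level for Chinese) rather than RoBERTa's BPE tokenizer; a small sketch, with the local path as an assumption:

```python
# Sketch: the release ships vocab.txt, so BertTokenizer is used, not a BPE tokenizer.
# "chinese_roberta_wwm_ext_pytorch" is assumed to be a local directory with the release files.
from pytorch_transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("chinese_roberta_wwm_ext_pytorch")
print(tokenizer.tokenize("使用全词掩码进行预训练"))  # roughly one token per Chinese character
```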
-
From [here](https://arxiv.org/pdf/1906.08237.pdf) on page 16, it seems we should set Layer-wise lr decay to 0.75. However, I didn't find a way to do so in `run_squad.py`. Could someone provide a sampl…
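In the meantime, a hedged sketch of one way to get layer-wise decay: build one optimizer parameter group per parameter and scale the learning rate by 0.75 per layer of depth. The parameter-name patterns, layer count, and base learning rate below are assumptions; adjust them to the actual model:

```python
# Sketch: layer-wise learning-rate decay via optimizer parameter groups.
# Assumes transformer blocks are named "...layer.<i>..." (true for BERT and XLNet in
# pytorch-transformers); embeddings get the strongest decay, the task head none.
import re
from pytorch_transformers import AdamW

def layerwise_lr_groups(model, base_lr=3e-5, decay=0.75, num_layers=24):
    groups = []
    for name, param in model.named_parameters():
        if not param.requires_grad:
            continue
        match = re.search(r"layer\.(\d+)\.", name)
        if match:                                   # transformer block i
            depth = num_layers - 1 - int(match.group(1))
        elif "embedding" in name.lower():           # embeddings sit below the first block
            depth = num_layers
        else:                                       # pooler / task head: full learning rate
            depth = 0
        groups.append({"params": [param], "lr": base_lr * (decay ** depth)})
    return groups

# optimizer = AdamW(layerwise_lr_groups(model), lr=3e-5)  # replaces the flat AdamW in run_squad.py
```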
-
Loading chinese_roberta_wwm_ext_pytorch with pytorch_transformers raises an error. (Loaded with RobertaConfig, RobertaForSequenceClassification, and RobertaTokenizer.)
-
bert-wwm-ext works fine, but this model does not.
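A minimal loading sketch: the released checkpoint is a BERT-architecture model (only the pretraining recipe follows RoBERTa), so it is loaded with the Bert* classes rather than the Roberta* ones, which expect a BPE vocab.json/merges.txt. The local directory name and label count below are assumptions:

```python
# Sketch: load chinese_roberta_wwm_ext_pytorch with the BERT classes.
# The checkpoint uses the BERT architecture and a WordPiece vocab.txt, so the
# Roberta* classes will not find the BPE files they expect.
from pytorch_transformers import BertConfig, BertTokenizer, BertForSequenceClassification

model_dir = "chinese_roberta_wwm_ext_pytorch"   # local directory, assumed layout
config = BertConfig.from_pretrained(model_dir, num_labels=2)
tokenizer = BertTokenizer.from_pretrained(model_dir)
model = BertForSequenceClassification.from_pretrained(model_dir, config=config)
```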