-
Could I ask which official pretrained BERT model you used? I'm using the wwm_uncased_L-24_H-1024_A-16 model and easily run into an OOM error.
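If it helps, a common OOM mitigation with the 24-layer model is to lower the maximum sequence length and batch size when starting the server. A minimal sketch using bert-as-service's Python entry point; the model path and the numeric values are illustrative assumptions:
```
from bert_serving.server import BertServer
from bert_serving.server.helper import get_args_parser

# Shorter sequences and smaller batches cut peak GPU memory.
args = get_args_parser().parse_args([
    '-model_dir', '/path/to/wwm_uncased_L-24_H-1024_A-16',  # assumed path
    '-max_seq_len', '64',
    '-max_batch_size', '32',
])
server = BertServer(args)
server.start()
```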
-
I want to do classification with Chinese texts. Though there is a default BERT Chinese pre-trained model, it's quite outdated. I would like to select different pre-trained models, like wwm BERT, X…
-
**Prerequisites**
> Please fill in by replacing `[ ]` with `[x]`.
* [x] Are you running the latest `bert-as-service`?
* [x] Did you follow [the installation](https://github.com/hanxiao/bert-a…
-
In the PyTorch version, after loading the bert-wwm Chinese model, calling tokenizer.tokenize still produces character-level segmentation. Will this cause a mismatch between the input and the model at usage time, given that the model was pretrained with whole word masking (wwm)?
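For what it's worth, character-level output is expected: whole word masking only changes which tokens are masked during pretraining, not the vocabulary or the tokenizer. A minimal sketch, assuming the Hugging Face transformers package and the hfl/chinese-bert-wwm checkpoint (the checkpoint name is an assumption here):
```
from transformers import BertTokenizer

# Assumed checkpoint; BERT-wwm ships the same character-level Chinese vocab.
tokenizer = BertTokenizer.from_pretrained("hfl/chinese-bert-wwm")
print(tokenizer.tokenize("自然语言处理"))
# Character-level pieces, e.g. ['自', '然', '语', '言', '处', '理'] --
# wwm affects the pretraining masking strategy, not how text is split.
```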
-
Google recently released two new BERT models with the Whole Word Masking strategy (BERT-Large(Base), Uncased (Whole Word Masking)). Do you have plans to pre-train new NCBI models based on this new relea…
-
Can you provide the best learning rates for different tasks with RoBERTa? I cannot find them in the technical report.
-
```
Traceback (most recent call last):
  File "run_classifier_serving.py", line 1087, in <module>
    tf.app.run()
  File "/data/aif/common/anaconda/envs/py3nlp_todd/lib/python3.6/site-packages/tensorflow/pyth…
```
-
## ❓ Questions & Help
This is about the padding problem. In the GLUE code in the examples, the padding for XLNet is on the left of the input, but in the SQuAD code, the padding is on the right. I was wondering…
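To make the two conventions concrete, here is a hedged illustration with plain token-id lists. The helper below is hypothetical, not the actual GLUE/SQuAD example code, and pad_id=5 (XLNet's `<pad>` id in transformers) is an assumption:
```
def pad(ids, max_len, pad_id=5, side="left"):
    """Pad a token-id list to max_len on the given side."""
    padding = [pad_id] * (max_len - len(ids))
    return padding + ids if side == "left" else ids + padding

ids = [17, 42, 99]                # illustrative token ids
print(pad(ids, 6, side="left"))   # left-padded:  [5, 5, 5, 17, 42, 99]
print(pad(ids, 6, side="right"))  # right-padded: [17, 42, 99, 5, 5, 5]
```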
-
## ❓ Questions & Help
I tried fine-tuning BERT on SQuAD on my local computer. The command I ran was:
```
python3 ./examples/run_squad.py \
  --model_type bert \
  --model_name_or_path bert-…
```
-
In the documentation:
- embeddings
- chinese_L-12_H-768_A-12/ (use one of Google's better pretrained models; it has been compressed and uploaded,
keras-bert can also load Baidu's ERNIE (conversion required, [https://github.com/ArthurRizar/tensorflow_ernie](https://githu…
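For reference, loading such a checkpoint with keras-bert looks roughly like the sketch below; the directory path and seq_len are illustrative assumptions:
```
import os
from keras_bert import load_trained_model_from_checkpoint

ckpt_dir = 'chinese_L-12_H-768_A-12'  # assumed local checkpoint directory
model = load_trained_model_from_checkpoint(
    os.path.join(ckpt_dir, 'bert_config.json'),
    os.path.join(ckpt_dir, 'bert_model.ckpt'),
    seq_len=128,  # illustrative maximum sequence length
)
model.summary()
```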