-
Below is my test code for English NER.
The code runs correctly without any error, but it has been running for 2 hours and still has not exited.
```
import kashgari
from kashgari.embeddings import BertEmbeddi…
```
-
Hi, I am trying to run basic masked word prediction in pytorch transformers to compare `BERT-large-uncased-WWM` and `COVID-Twitter-BERT` for a publication.
```
from transformers import pipeline, A…
-
```
Using TensorFlow backend.
--2020-07-28 11:21:05-- https://storage.googleapis.com/hfl-rc/chinese-bert/chinese_wwm_L-12_H-768_A-12.zip
Resolving storage.googleapis.com (storage.googleapis.com)... 34.…
```
-
When fine-tuning with Chinese-BERT-wwm, do I need to use LTP for word segmentation?
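For reference, BERT's stock tokenizer already splits Chinese text into single characters before WordPiece, regardless of word boundaries, so no external segmenter is involved at tokenization time. A minimal sketch of that character-splitting step (modeled on `_tokenize_chinese_chars` in the original BERT `BasicTokenizer`; the CJK code-point ranges below are copied from that implementation):

```python
def is_cjk_char(cp):
    """Return True if the code point falls in a CJK Unicode block
    (ranges taken from BERT's BasicTokenizer)."""
    return (
        0x4E00 <= cp <= 0x9FFF or 0x3400 <= cp <= 0x4DBF
        or 0x20000 <= cp <= 0x2A6DF or 0x2A700 <= cp <= 0x2B73F
        or 0x2B740 <= cp <= 0x2B81F or 0x2B820 <= cp <= 0x2CEAF
        or 0xF900 <= cp <= 0xFAFF or 0x2F800 <= cp <= 0x2FA1F
    )

def split_chinese_chars(text):
    """Insert spaces around every CJK character, mirroring how the
    BERT BasicTokenizer pre-splits Chinese text into single characters."""
    out = []
    for ch in text:
        if is_cjk_char(ord(ch)):
            out.append(" " + ch + " ")
        else:
            out.append(ch)
    return "".join(out).split()

# split_chinese_chars("中文BERT模型") → ['中', '文', 'BERT', '模', '型']
```

Note that whole word masking only changes which tokens are masked together during pre-training; it does not change the tokenizer used at fine-tuning time.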
-
Hi, I'd like to ask: roughly how much pre-training data was used for electra-small and electra-large?
-
## 🐛 Bug
Model: Bert (bert-large-uncased-whole-word-masking)
The problem arises when using the official example script for fine-tuning on SQuAD data:
```
python -m torch.distributed.launch --…
```
-
I've followed some examples in transformers with no success:
```
from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering
import tensorflow as tf
tokenizer = AutoTokenizer.fro…
-
Hello,
When I try to use the transformers pipelines, the 'fill-mask' case does work fine:
`nlp_fill = pipeline('fill-mask', model="dccuchile/bert-base-spanish-wwm-cased", tokenizer="…`
-
Hello, thank you so much for sharing. But when I test the converted ERNIE model using [pytorch-transformers](https://github.com/huggingface/pytorch-transformers), the performance on the cloze task is…
-
I understand that you used the LTP tokenizer during pre-training, but do I also need to use LTP when fine-tuning with your models? When I load any of your models through the transformers library, it either falls back to the same character-based tokenizer as BERT base, or it reports that the model's corresponding vocab.json cannot be found. Is this a bug?