-
Hi, there's something strange with the model using transformers library:
```
In [5]: tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased")
In [6]: tokenizer.model…
```
-
## Environment info
- `transformers` version: 4.3.3
- Platform: Linux-4.15.0-29-generic-x86_64-with-debian-stretch-sid
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0 (True)
…
-
**Describe the bug**
I have downloaded the Chinese BERT model and the pre-trained model for MultiWOZ_zh, but the joint accuracy I obtain does not match the reported value.
**To Reproduce**
Steps to reproduce the behavior…
-
Hello, we are using your BERT-wwm-ext (TF) model for NER and sentiment-classification tasks. With identical configurations and a fixed random seed, two runs still produce different results; this problem does not occur when we switch to Google's BERT or RoBERTa. When training on a small subset (around 20 samples) the results are identical, but once the training set grows larger, the results diverge. We don't know the cause. Thank you for your help.
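For reference, a minimal sketch of pinning every RNG source under our control (the helper name `set_all_seeds` is hypothetical, and the PyTorch flags are an assumption about the training setup). Residual run-to-run differences on larger training sets often come from sources that a single seed does not cover, such as nondeterministic CUDA kernels or multi-worker data loading:

```python
import os
import random

def set_all_seeds(seed: int) -> None:
    """Seed every RNG source we control. Unseeded sources (e.g.
    nondeterministic CUDA kernels, multi-threaded data loading)
    can still cause run-to-run differences."""
    random.seed(seed)                          # Python's built-in RNG
    os.environ["PYTHONHASHSEED"] = str(seed)   # hash randomization
    try:
        import numpy as np
        np.random.seed(seed)                   # NumPy RNG, if installed
    except ImportError:
        pass
    try:
        import torch
        torch.manual_seed(seed)                # CPU and CUDA RNGs
        # Trade speed for determinism in cuDNN kernel selection.
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False
    except ImportError:
        pass

# Two runs with the same seed should draw identical values.
set_all_seeds(42)
first = [random.random() for _ in range(5)]
set_all_seeds(42)
second = [random.random() for _ in range(5)]
print(first == second)  # → True
```

If the two draws above match but full training runs still diverge, the remaining nondeterminism is likely in GPU kernels or the data pipeline rather than in the seeds themselves.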
-
The model referenced in train.sh cannot be found.
-
-
Many thanks.
-
## Environment info
- `transformers` version: 3.5.1
- Platform: Linux-5.4.0-1029-gcp-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.0 (False)
- Tensorflow versio…
-
* Contextualized Topic Models version: 1.8.2
* Python version: 3.6
* Operating System: Windows 10
### Description
Hey guys... I'm trying to use CTM's for Topic Modeling answers of a survey. Th…
-
I used the parameters shown in your paper.
![image](https://user-images.githubusercontent.com/70563473/112941484-4034f600-9161-11eb-9783-1481000b3f0d.png)
pre-trained BERT:Chinese with Whole Word…