## Environment info
- `transformers` version: 4.5.1
- Platform: Darwin-18.7.0-x86_64-i386-64bit
- Python version: 3.7.0
- PyTorch version (GPU?): 1.8.1 (False)
- Tensorflow version (GPU?): not …
-
## Environment info
- `transformers` version: 4.0.0-rc-1
- Platform: Linux-4.19.0-12-amd64-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow versio…
-
## Environment info
- `transformers` version: 4.1
- Platform: Linux
- Python version: 3.8
- PyTorch version (GPU?): -
- Tensorflow version (GPU?): -
- Using GPU in script?: No
- Using…
-
# 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): MBART
Language I am using the model on (English, Chinese ...): English, Romanian
The problem arises when using:
* [ ] the offici…
-
# ❓ Questions & Help
## Details
I use the following code from https://huggingface.co/microsoft/unilm-base-cased to load the model.
```python
from transformers import AutoTokenizer, AutoModel
to…
```
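The snippet above is cut off; as a sketch, the standard `transformers` Hub loading pattern it presumably follows looks like this. Note that `AutoModel.from_pretrained` raises an error when the checkpoint's `config.json` declares a `model_type` that the installed `transformers` version has no registered architecture for, which may be what the question is running into with this checkpoint:

```python
# Sketch of the generic Hub loading pattern (not the asker's exact,
# truncated code). Wrapped in a function so nothing is downloaded
# at import time.
from transformers import AutoModel, AutoTokenizer


def load_checkpoint(name: str = "microsoft/unilm-base-cased"):
    """Load the tokenizer and model weights for a Hub checkpoint.

    Raises an error if the checkpoint's config declares a model type
    that this transformers version does not recognize.
    """
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModel.from_pretrained(name)
    return tokenizer, model
```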
-
## Environment info
- `transformers` version: NA
- Platform: NA
- Python version: NA
- PyTorch version (GPU?): NA
- Tensorflow version (GPU?): NA
- Using GPU in script?: NA
- Using distribu…
-
## Environment info
- `transformers` version: 4.3.0.dev0
- Platform: Linux-4.15.0-91-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.0a0+1606899 (True)
- T…
-
I'm trying to run the model with the given weights, but it keeps showing an error. I am working on Google Colab. I have tried removing `--cuda-device`, but the error remains. I have tried searching on stackove…
-
# 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Reformer tokenizer
## To reproduce
Steps to reproduce the behavior:
1. Try to load the pretrained reformer-enwik8 tokenizer wi…
-
## Environment info
- `transformers` version: 4.3.3
- Platform: Linux-5.4.0-65-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.9
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorfl…