## Environment info
- `transformers` version: 4.3.2
- Platform: Linux
- Python version: 3.7
- PyTorch version (GPU?): yes
- Tensorflow version (GPU?): -
- Using GPU in script?: no
- Usin…
-
I couldn't locate them in the provided documentation; would you mind pointing to them, or linking them in the README?
```
We provide checkpoints for three of the best models fine-tuned on CUAD: RoBERTa-base (~…
```
-
## Environment info
- `transformers` version: 4.4.dev0
- Platform: Ubuntu 18
- Python version: 3.7
- PyTorch version (GPU?): 1.7.1 (YES)
- Tensorflow version (GPU?):
- Using GPU in script?: …
-
Hello,
It seems like some of the weights were renamed or reshaped in the v2 model releases, and I couldn't quite figure out how to map them to the old structure.
```
# it seemed like
pos_q_proj => q…
```
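Key renames like this can usually be handled by rewriting the checkpoint's state dict before loading. A minimal sketch, assuming a hypothetical substring mapping (the actual v2 key names would need to be confirmed against the released checkpoint files):

```python
# Hypothetical old -> new key substring mapping; the real v2 names
# must be verified against the released checkpoint.
KEY_MAP = {
    "pos_q_proj": "q_proj",  # assumption based on the truncated note above
}

def remap_state_dict(state_dict):
    """Return a copy of state_dict with old key names rewritten."""
    remapped = {}
    for key, value in state_dict.items():
        # Apply every rename rule to this key.
        for old, new in KEY_MAP.items():
            key = key.replace(old, new)
        remapped[key] = value
    return remapped
```

The remapped dict can then be passed to `model.load_state_dict(...)`, optionally with `strict=False` while the mapping is still incomplete.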
-
This seems like it would be really cool to use but unfortunately the example code in the readme is no longer valid, and I've been struggling for a long time to get this thing to work. It seems "NLITop…
-
Hi
Based on the README at [1], run_language_modeling.py does not support the T5 model so far; it would be really nice to include this model as well.
There is also this line "data_args.block_size = tokenizer…
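The truncated line presumably defaults `block_size` from the tokenizer. A minimal sketch of that defaulting pattern, using hypothetical stand-in objects and assuming the common convention of capping at the tokenizer's maximum length:

```python
from types import SimpleNamespace

# Stand-ins for the script's argument object and a tokenizer; both are
# hypothetical here, used only to illustrate the defaulting pattern.
data_args = SimpleNamespace(block_size=-1)
tokenizer = SimpleNamespace(model_max_length=512)

# If the user did not set a block size, fall back to the tokenizer's
# maximum length; otherwise cap it so sequences fit the model.
if data_args.block_size <= 0:
    data_args.block_size = tokenizer.model_max_length
else:
    data_args.block_size = min(data_args.block_size, tokenizer.model_max_length)

print(data_args.block_size)  # 512 with the defaults above
```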
-
On Colab, I did:
```
!pip install transformers
```
Then:
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-xxlarge-v2")
…
```
-
Thanks for sharing the repo. However, I could not access the pre-trained base and large models at the paths below.
https://github.com/microsoft/DeBERTa/releases/download/v0.1/base.zip
https://gi…
-
## Environment info
- `transformers` version:
- Platform: macos
- Python version: 3.8.3
- PyTorch version (GPU?): no
- Tensorflow version (GPU?): no
- Using GPU in script?: no
- Using dis…
-
## Environment info
- `transformers` version: 4.0.0
- Platform: Linux
- Python version: 3.8
- PyTorch version (GPU?): 1.7
### Who can help
@BigBird01 @LysandreJik
## Information
I'd l…