-
## Environment info
- `transformers` version: `nightly`
- Platform: PyTorch
- Python version: 3.6
- PyTorch version (GPU?): TPU
- Tensorflow version (GPU?): n/a
- Using GPU in script?: no
-…
-
Hi, I'm trying to run the `train_distil_marian_enro_tpu.sh` example on Colab/Kaggle TPUs, and for some reason it gives me the following output:
@sshleifer
```
Exception in device=TPU:0: Cannot acces…
```
-
Hi,
I have made the relevant changes to trainer.py by inserting `num_tpu_cores = 8` and exporting `XRT_TPU_CONFIG`. I can also confirm that train.py executes on the TPU as per the …
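For reference, a minimal sketch of the kind of setup described above; the worker address and port are placeholders, not values from this report:
```
import os

# Point torch_xla at the TPU worker before launching training.
# "10.0.0.2:8470" is a placeholder; a real setup uses the TPU's own IP.
os.environ["XRT_TPU_CONFIG"] = "tpu_worker;0;10.0.0.2:8470"
```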
-
## Environment info
- `transformers` version: `'4.2.0dev0'`
- Platform: Debian
- Python version: `Python 3.6.10 |Anaconda, Inc.| (default, May 8 2020, 02:54:21)`
- PyTorch version (GPU?): `to…
-
**Describe the bug**
Training any model does not work on TPUs due to an error in the way `modelPT.py` calculates `optim_config['sched']['t_num_workers']` [here](https://github.com/NVIDIA/NeMo/b…
-
## 🐛 Bug
## Please reproduce using the BoringModel
Converted BoringModel.ipynb to a .py script and added `tpu_cores=8` to the `Trainer`.
While the code runs successfully on a Google Cloud TPU VM Pod v3-8,
pr…
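For context, a minimal sketch of the reported setup; the model below is a simplified stand-in for Lightning's bug-report BoringModel, and `tpu_cores=8` follows the `Trainer` API of Lightning versions from that period:
```
import torch
import pytorch_lightning as pl

class BoringModel(pl.LightningModule):
    """Simplified stand-in for Lightning's bug-report BoringModel."""
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def training_step(self, batch, batch_idx):
        # Trivial loss so the training loop exercises the TPU path.
        return self.layer(batch).sum()

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)

train_loader = torch.utils.data.DataLoader(torch.randn(64, 32), batch_size=2)
trainer = pl.Trainer(tpu_cores=8, max_epochs=1)  # tpu_cores=8 as in the report
trainer.fit(BoringModel(), train_loader)
```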
-
## 🐛 Bug
### To Reproduce
Steps to reproduce the behavior:
1. Open lightning_mnist_tpu.ipynb
2. Run the code
### Expected behavior
The code runs normally and faster than on a GPU.
…
-
```
import torch

### cpu
input = torch.zeros(8, 3, 7, 10, dtype=torch.float)
src = torch.randn(8, 3, 7, 10, dtype=torch.float)
index = torch.randint(0, 3, [8, 3, 7, 10])
dim = 1
input.scatter_(di…
```
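The snippet above is cut off; for reference, a self-contained version of the CPU call it appears to be setting up. The continuation of the truncated `scatter_` line is an assumption:
```
import torch

input = torch.zeros(8, 3, 7, 10, dtype=torch.float)
src = torch.randn(8, 3, 7, 10, dtype=torch.float)
index = torch.randint(0, 3, [8, 3, 7, 10])
dim = 1
# Assumed continuation of the truncated call: scatter src values into
# input along dim=1 at the positions given by index.
input.scatter_(dim, index, src)
print(input.sum())
```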
-
## Environment info
- `transformers` version: 4.6.1
- Platform: Linux-5.4.109+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu101 (False)
- Tensorflo…
-
## Environment info
- `transformers` version: 4.9.0.dev0
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.9.0+cu102 (False)
- Tens…