-
Any help would be appreciated.
I am running on two PCs, one with Windows 11 and one with Windows 10, each with an Nvidia GPU, and I am facing the same issue on both.
The installation completed correctly, but I keep getting "No…
-
### Description
Hello, I'm running into a core dump when writing TPU kernels. When testing with interpret mode on, the kernel worked; without it, I get a core dump. Any temporary fix is apprecia…
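For context, a minimal sketch of the interpret-mode workaround described above, assuming a recent JAX with Pallas (the kernel name and shapes here are illustrative, not from the original report). With `interpret=True`, Pallas runs the kernel in a Python interpreter on CPU, bypassing the TPU compiler entirely, which is why it can sidestep the core dump while the kernel logic is debugged:

```python
import jax
import jax.numpy as jnp
from jax.experimental import pallas as pl

def add_kernel(x_ref, y_ref, o_ref):
    # Element-wise add; Pallas refs are read/written with array-style indexing.
    o_ref[...] = x_ref[...] + y_ref[...]

x = jnp.arange(8, dtype=jnp.float32)
y = jnp.ones(8, dtype=jnp.float32)

# interpret=True avoids TPU compilation, trading speed for debuggability.
out = pl.pallas_call(
    add_kernel,
    out_shape=jax.ShapeDtypeStruct(x.shape, x.dtype),
    interpret=True,
)(x, y)
```

The same `pallas_call` can later be run with `interpret=False` (the default) once the kernel is known to be correct, to isolate whether the crash comes from the kernel logic or from compilation.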
-
Does OpenNMT-tf support training on Cloud TPUs?
-
### 🚀 The feature, motivation, and pitch
trlX uses HuggingFace accelerate under the hood. Accelerate has the capability to leverage Google's TPUs for faster training. I'm interested in supporting trl…
-
Hello! I am training the first two knowledge distillation stages of Mamba 2 on one DGX-H100x8 node, and I am experiencing train times of ~8 hours for the first stage, and ~13 hours for the second stag…
-
## 🐛 Bug
Trying to test a simple `xm.send` and `xm.recv` gives an error.
## To Reproduce
Steps to reproduce the behavior:
1. Run test code below
```python
import torch
import torch_xla.core.xl…
```
-
Hello everyone, practitioner here.
I am looking to train a serious non-LLM model, and the training is expected to be very computationally demanding, so I am looking for maximum speed.
I know that Google's TPUs …
-
Is there a way to use a tensor processing unit for acceleration? If not, is this feature going to be added in the future?
-
Here is my issue: https://github.com/tensorflow/tensorflow/issues/37054
-
Although TPUs are way overkill for a model of textgenrnn's size, Colaboratory offers free TPU training, so support might be good.
It should just be a simple flag addition.