-
# Progress
- [x] Implement TPU executor that works on a single TPU chip (without tensor parallelism) #5292
- [x] Support single-host tensor parallel inference #5871
- [x] Support multi-host ten…
-
## ❓ Questions and Help
Hi All,
I have this code:
```python
import optuna
from torch.optim.lr_scheduler import ReduceLROnPlateau
# Assuming dataset is already defined
train_size = int(0.8 * len(da…
-
The sg2002 documentation mentions:
> Supports mainstream neural network frameworks: Caffe, PyTorch, TensorFlow (Lite), ONNX, and MXNet.
However, **tpu-mlir_quick_start_zh.pdf (release 1.2.103)** only covers Sophgo's **CV18xx** and **BM168x** chips.
Digging further, it seems tpu-mlir's support for the sg series is limited to https://github…
-
## ❓ Questions and Help
I'm running this official [script here](https://github.com/pytorch/xla/blob/master/test/test_train_mp_imagenet_fsdp.py), but I only see two xla devices being used, xla:0 and…
-
## Detailed Description
Currently, the graph neural network library dependencies don't support TPUs with PyTorch Geometric, or at least don't seem to, because of their custom kernels. We could add a Jax…
-
Current native Docker support (#1910) doesn't cover Google's TPU accelerators. We should add support for using TPUs from Docker.
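As a sketch of what this might look like on a Cloud TPU VM (the image name is a placeholder and the exact flags are assumptions for illustration, not a finalized design): TPU runtimes are typically reached from a container via privileged mode and host networking, with the framework selecting the TPU through an environment variable such as `PJRT_DEVICE`:

```shell
# Hypothetical invocation on a Cloud TPU VM; image name is a placeholder.
docker run --rm --privileged --net=host \
  -e PJRT_DEVICE=TPU \
  my-training-image:latest \
  python train.py
```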
-
### Please check that this issue hasn't been reported before.
- [X] I searched previous [Bug Reports](https://github.com/axolotl-ai-cloud/axolotl/labels/bug) and didn't find any similar reports.
### Exp…
-
In short, we observed that `mixed_bfloat16` on TPU is slower than `float32` in our model benchmarks. Please refer to this [sheet](https://docs.google.com/spreadsheets/d/1TPwbe8p6eD61arkoIXQnPHf3rgFIDFUZCot…
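For context on what mixed bfloat16 changes numerically: bfloat16 keeps float32's 8-bit exponent but only a 7-bit mantissa, so it is literally the top 16 bits of a float32. An illustrative pure-Python sketch of that truncation (not the TPU implementation, which also rounds rather than truncates):

```python
import struct

def truncate_to_bfloat16(x: float) -> float:
    """Keep only the top 16 bits of the float32 encoding of x
    (sign, 8-bit exponent, 7-bit mantissa) -- i.e. bfloat16 by truncation."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]  # float32 -> uint32
    return struct.unpack(">f", struct.pack(">I", bits & 0xFFFF0000))[0]

print(truncate_to_bfloat16(1.0))      # exactly representable: 1.0
print(truncate_to_bfloat16(3.14159))  # precision lost: 3.140625
```

Because exponent range is unchanged, casts between float32 and bfloat16 never overflow, but the ~2-3 decimal digits of mantissa precision can shift benchmark numerics even when speed is the headline concern.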
-
I'm an ML Framework ([GoMLX, for Go language](https://github.com/gomlx/gomlx)) developer using XLA (XlaBuilder for now, but at some point changing to PJRT), and I wanted to run some of my trainers in …
-
This is an amazing library! I was wondering whether it has any support for TPUs? If not, how difficult would it be to get it working on TPUs? I would be willing to contribute to this if it's not super time…