-
Similar to the backport request thread for the 2.2 release (https://github.com/pytorch/xla/issues/6036), this issue tracks backports for the 2.5 release.
For any PRs you want to backport to 2.5, please reply…
-
I'm currently building a library in Rust that leverages XLA and PJRT, and while going through the APIs and the implementation of the Python bindings, I'm a little confused about the terminology and the…
-
Hi,
Our (@zml) Llama implementation fails to compile if we run with `topk > 1`.
We're not sure what triggers it; it looks related to some pattern matching.
Please find attached our implementation wit…
-
There is a performance regression in BP on CPU from JAX 0.4.31 to 0.4.32. The reason seems to be the new CPU backend with increased concurrency (https://github.com/google/jax/issues/23590).
Defaul…
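A hedged sketch of the common mitigation for this class of regression: XLA exposes a flag to fall back to the previous CPU runtime, which can be set via the `XLA_FLAGS` environment variable before JAX is imported. The flag name below is taken from the linked discussion and may change between releases; verify it against your JAX/XLA version.

```python
import os

# Opt out of the new CPU runtime (assumed flag name from the linked issue;
# check your JAX version's changelog before relying on it). Appending keeps
# any flags the user has already set.
os.environ["XLA_FLAGS"] = (
    os.environ.get("XLA_FLAGS", "") + " --xla_cpu_use_thunk_runtime=false"
).strip()

# Note: this must run before `import jax`, because XLA reads XLA_FLAGS once
# at backend initialization.
print(os.environ["XLA_FLAGS"])
```

Since the flag is read only at startup, setting it inside an already-running process after JAX has initialized will have no effect.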
-
### Feature
Automatic mixed precision for XLA has landed in [PyTorch 1.8.1](https://github.com/pytorch/pytorch/releases/tag/v1.8.1) and torch/xla nightly.
We should enable it in `create_supervised…
-
## 🐛 Bug
Running the upstreamed benchmarking scripts with the following command results in an unexpected error. It does work when using the CPU OpenXLA fallback, though.
```bash
python xla/benchmar…
```
-
Hello,
We observed an issue where a tensor broadcast from a single-dimension parameter is marked as sharded by the XLA sharding propagator. This sharded tensor, while doing computation with another tensor which ha…
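The shape mechanics behind this report can be illustrated with plain NumPy broadcasting; this is only a sketch of the shapes involved, not of the sharding decision itself, which happens inside XLA's propagation pass. The shapes here are illustrative, not from the original report.

```python
import numpy as np

# A single-dimension parameter, e.g. a per-channel scale of shape (8,).
scale = np.ones(8)

# Broadcasting it against a batched operand of shape (4, 8) produces a
# logical (4, 8) result. It is this broadcast result that, per the report,
# the sharding propagator marks as sharded -- which can then conflict with
# another operand carrying a different (or no) sharding annotation.
activations = np.ones((4, 8))
out = activations * scale

print(out.shape)  # (4, 8)
```

The point is that the broadcast output has the full batched shape, so any sharding attached to it by propagation applies to dimensions the original 1-D parameter never had.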
-
## Fix the Op info test for `nn.functional.feature_alpha_dropout .. nn.functional.grid_sample`
1. Find lines 223 to 227 of [test_ops.py](test/test_ops.py) and remove
`nn.functional.feature_alp…
qihqi updated
1 month ago
-
## 🚀 Feature
An option to train models on a TPU or TPU pod using the `torch_xla` package.
## Motivation & Examples
Motivation: speed up training and utilize the best available resources.
Example: i…
-
Indeed, tensorflow==2.16.1 works even for me. I read through the source for tensorflow==2.17.0 and 2.16.1 to try to find out what the issue might be. The following is what I found out:
The e…