-
## 🐛 Bug
[pip 24.1 deprecated legacy version identifiers](https://pip.pypa.io/en/stable/news/#deprecations-and-removals) and no longer allows installing the current nightly wheels directly. Other p…
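Since pip 24.1 enforces PEP 440, a quick way to check whether a wheel's version string will still install is to validate it against the version grammar. The regex below is an illustrative, simplified approximation of PEP 440, not pip's actual validator (pip uses its vendored `packaging` library):

```python
import re

# Simplified approximation of the PEP 440 public version grammar:
# [epoch!]release[pre][.post][.dev][+local]. Treat this as a sketch only;
# pip's real check lives in the vendored `packaging` library.
PEP440_RE = re.compile(
    r"^([0-9]+!)?"                      # epoch
    r"[0-9]+(\.[0-9]+)*"                # release segment
    r"((a|b|rc)[0-9]+)?"                # pre-release
    r"(\.post[0-9]+)?"                  # post-release
    r"(\.dev[0-9]+)?"                   # dev release (nightlies)
    r"(\+[a-z0-9]+(\.[a-z0-9]+)*)?$"    # local version label
)

def is_pep440(version: str) -> bool:
    """Return True if `version` matches the simplified PEP 440 pattern."""
    return PEP440_RE.match(version.lower()) is not None

print(is_pep440("2.4.0.dev20240612+cpu"))  # nightly-style version: True
print(is_pep440("2.4.0-2024.06.12"))       # legacy-style identifier: False
```

Wheels tagged with identifiers that fail this kind of check are the ones pip ≥ 24.1 refuses to install.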
-
## 🐛 Bug
Running into the following error when using `torch.compile(backend="openxla")`:
```
File "torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*a…
-
## 🐛 Bug
Running the upstreamed benchmarking scripts with the following command results in an unexpected error. It does work when using the CPU OpenXLA fallback, though.
```bash
python xla/benchmar…
-
Hi, fantastic work with NATTEN. Is XLA support on your roadmap? It would be great to enable neighborhood attention on TPUs and on non-NVIDIA GPUs.
-
Thank you for the repo.
I am wondering if a recipe for TPU pods can be added. I have access to a v4-32 and want to train a LLaMA model from scratch. Wondering if the repo can be extended for this us…
-
Hi,
The XLA:GPU profiler segfaults when CUPTI initialization fails:
```
Thread 1 "python" received signal SIGSEGV, Segmentation fault.
0x00007fff0401cc7e in nsync::nsync_mu_lock(nsyn…
-
### Bug description
When configuring a `DDPStrategy` with multiple devices that do not use the `torch.cuda` API, we trigger the following exception:
```python
File "/home/hpclee1/rds/hpc-work/.…
-
I'm trying to build XLA from source for CPU following the instructions [here](https://github.com/openxla/xla/blob/main/docs/developer_guide.md), and it's failing with:
```
xla/service/gpu/runtime/…
-
In the following MWE
```cpp
xla::XlaBuilder root("root");
auto zero = xla::ConstantR0(&root, 0);
absl::Span<const int> xs = {0};
auto zeros = xla::ConstantR1(&root, xs);
…
-
## 🐛 Bug Report
When using [dynamo sharding](https://github.com/pytorch/xla/blob/88bcb45fda546e5c1fb4f12de75251bfa5fd332e/torch_xla/core/custom_kernel.py#L17) inside `torch.compile`, I encounter th…