-
There seems to be increasing interest in a plugin that would let JAX models be used to compute forces, the same way the OpenMM-Torch plugin does for PyTorch models. I have no experience using JAX, bu…
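As context for such a plugin, here is a minimal, hypothetical sketch of the core idea (the harmonic energy function, spring constant, and particle positions are invented for illustration, not part of any proposed API): in JAX, forces fall out as the negative gradient of a scalar energy function.

```python
import jax
import jax.numpy as jnp

def energy(positions):
    """Toy harmonic bond energy between particles 0 and 1.

    positions: (N, 3) array of particle coordinates.
    E = 0.5 * k * |r0 - r1|^2, with an arbitrary spring constant k.
    """
    k = 100.0
    disp = positions[0] - positions[1]
    return 0.5 * k * jnp.dot(disp, disp)

# Forces are the negative gradient of the energy w.r.t. positions;
# this is the quantity a plugin would hand back to OpenMM each step.
positions = jnp.array([[0.0, 0.0, 0.0],
                       [1.0, 0.0, 0.0]])
f = -jax.grad(energy)(positions)
print(f.shape)  # (2, 3): one 3-vector force per particle
```

An actual plugin would additionally need to JIT-compile the gradient and exchange positions/forces with OpenMM's C++ layer, analogous to what OpenMM-Torch does via TorchScript.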
-
Here is the repro.
```
import os
XLA_FLAGS = [
"--xla_dump_to=/tmp/hlos",
"--xla_gpu_enable_latency_hiding_scheduler=true",
"--xla_gpu_enable_triton_gemm=false",
"--xla_gpu_…
-
### Description
I used
```
python3 -m pip install --upgrade "jax[cuda12]"
```
to install JAX on a GPU node, but am getting a `CUDA_ERROR_SYSTEM_NOT_READY` error:
```
(base) $ python3 -c "import…
-
## 🐛 Bug
## To Reproduce
Here are the two scripts for the experiment:
test1.py
```
import torch
import torch_xla.core.xla_model as xm
import math
random_k = torch.randn((100, 100), dtype=…
-
## 🐛 Bug
I encountered some correctness issues while using Dynamo + OpenXLA. After investigation, I found that Torch-XLA currently generates the same hash for two different graphs when computing th…
-
## Describe the bug
Using the XLA interface crashes the program.
## To Reproduce
Run the example code in [envpool/examples/xla_step.py](https://github.com/sail-sg/envpool/blob/main/examples/xla_step.…
-
## ❓ Questions and Help
I'm trying to implement an in-place operator using pallas, and wrap it as a torch custom op. However, I found it difficult to make it work with `torch.compile`. More specifi…
-
### Issue Type
Bug
### Source
source
### Tensorflow Version
tf2.11
### Custom Code
Yes
### OS Platform and Distribution
Linux Ubuntu 20.04
### Mobi…
-
## 🐛 Bug
Here at AWS we have a single PJRT device plugin for both PyTorch and JAX, and recently we've made improvements to our device plugin to make it work better with JAX. I.e. now `PJRT_LoadedExec…
-
### Description of the bug:
When I convert a pytorch model containing a MaxPool2D module, `ai_edge_torch.convert` crashes. This can be reproduced on my setup using the following minimal repro:
``…