-
There seems to be increasing interest in a plugin that would let JAX models be used to compute forces, the same way the OpenMM-Torch plugin does for PyTorch models. I have no experience using JAX, bu…
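For context, such a plugin would, like OpenMM-Torch's `TorchForce`, call a user-supplied model that maps particle positions to a potential energy, with forces recovered as the negative gradient. A minimal JAX sketch of that contract (the harmonic potential and the spring constant `k` are purely illustrative assumptions, not anything OpenMM-specific):

```python
import jax
import jax.numpy as jnp

def potential(positions):
    # Illustrative potential: a harmonic well with k = 2.0 (an assumed
    # constant for this sketch) pulling every particle toward the origin.
    k = 2.0
    return 0.5 * k * jnp.sum(positions ** 2)

# Forces are the negative gradient of the potential energy, F = -dU/dx.
forces = jax.jit(jax.grad(lambda p: -potential(p)))

pos = jnp.array([[1.0, 0.0, 0.0],
                 [0.0, -2.0, 0.0]])
print(forces(pos))  # F = -k * x, i.e. [[-2, 0, 0], [0, 4, 0]]
```

A plugin would then only need to hand XLA-compiled versions of `potential` and `forces` to the MD engine at each step.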
-
### Suggestion Description
Dear ROCm developers,
according to some tests I performed, Managed Memory was not really working in ROCm 5.x, but it does work in at least ROCm 6.1.2. Is the XLA implemen…
-
```julia
julia> xr = Reactant.ConcreteRArray(rand(Float32, 2, 3))
2×3 Reactant.ConcreteRArray{Float32, (2, 3), 2}:
0.184252 0.863562 0.0996157
0.14061 0.574859 0.236953
julia> Reactant.@…
-
## ❓ Questions and Help
```
pip install torch_xla-2.2.0-cp310-cp310-manylinux_2_28_x86_64.whl
```
But I got the error:
```
ERROR: torch_xla-2.2.0-cp310-cp310-manylinux_2_28_x86_64.whl is not a supported wheel on…
```
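That error usually means the wheel's tags don't match the running interpreter: a `cp310` wheel only installs under CPython 3.10 on a matching `manylinux`/x86_64 platform. A quick sanity check is to compare the wheel's Python tag against your interpreter (the helper below is a simplified sketch assuming the standard `name-version-pytag-abitag-platform.whl` layout, with no build tag):

```python
import sys

def wheel_python_tag(wheel_filename):
    # Wheel filenames follow name-version-python_tag-abi_tag-platform.whl,
    # so the third dash-separated field is the Python tag (e.g. 'cp310').
    return wheel_filename.removesuffix(".whl").split("-")[2]

tag = wheel_python_tag("torch_xla-2.2.0-cp310-cp310-manylinux_2_28_x86_64.whl")
print(tag)  # -> cp310

# The running interpreter's equivalent tag; these must agree for pip
# to accept the wheel.
print(f"cp{sys.version_info.major}{sys.version_info.minor}")
```

If the two tags differ (e.g. you are on Python 3.11), pip refuses the wheel with exactly this message.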
-
### Description
I am attempting to add a custom operation using the typed (rather than untyped) XLA FFI API. However, I get a warning/error when trying to use it that the symbol cannot be found:…
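A "symbol cannot be found" error at FFI registration usually means the handler was never exported from the shared library, most often because a C++ definition was compiled without `extern "C"` (so the name is mangled) or with hidden visibility. `nm -D libfoo.so` shows what is actually exported; the same check can be scripted with `ctypes`, shown here against libm since the actual plugin library and handler names are unknown (`MyTypedFfiHandler` is a made-up placeholder):

```python
import ctypes
import ctypes.util

# Load a shared library and probe its dynamic symbol table; attribute
# lookup on a CDLL raises AttributeError when the symbol is absent.
libm = ctypes.CDLL(ctypes.util.find_library("m"))
print(hasattr(libm, "cos"))                # True: 'cos' is exported
print(hasattr(libm, "MyTypedFfiHandler"))  # False: no such symbol
```

If your handler's name only appears mangled (e.g. `_Z17MyTypedFfiHandler…`) in the `nm` output, wrapping its definition in `extern "C"` is the usual fix.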
-
## 🐛 Bug
I followed the instructions provided in the [README.md](https://github.com/pytorch/xla?tab=readme-ov-file#gpu-plugin-beta) file and the GPU instructions provided in [this file](https://git…
-
It seems that the loss either fails to converge or we hit an OOM, depending on the `XLA_DISABLE_FUNCTIONALIZATION` flag and ZeRO-1.
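For anyone trying to reproduce the two behaviours: the flag is an environment variable and must be set before `torch_xla` is imported. A minimal toggle (the training loop itself is omitted here):

```python
import os

# Must be set before importing torch_xla; "1" disables the
# functionalization pass, "0" (or unset) leaves it enabled.
os.environ["XLA_DISABLE_FUNCTIONALIZATION"] = "1"
print(os.environ["XLA_DISABLE_FUNCTIONALIZATION"])
```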
### System info
```
aws-neuronx-runtime-discovery==2.9
libneuronxla==2.0…
-
While building TensorFlow on a big-endian system, I encountered the following error:
```
ERROR: .cache/bazel/_bazel_root/efb88f6336d9c4a18216fb94287b8d97/external/local_xla/xla/service/cpu/runtime/BU…
-
Originally reported as google/jax#20184 and google/jax#16008:
> ### Description
> When inspecting the estimated flop count of a compiled function, dot_general, einsum, '@', jnp.dot, etc show "-1.0…
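For reference, the flop estimate in question comes from the compiled executable's cost analysis. A minimal way to inspect it, and to reproduce either the reported `-1.0` or a correct count depending on the JAX/XLA version, is:

```python
import jax
import jax.numpy as jnp

@jax.jit
def f(x, y):
    return x @ y  # lowers to a dot_general

x = jnp.ones((8, 8), jnp.float32)
analysis = f.lower(x, x).compile().cost_analysis()
if isinstance(analysis, list):  # older JAX versions return a one-element list
    analysis = analysis[0]

# On affected versions the 'flops' entry is -1.0 rather than the
# expected 2 * 8 * 8 * 8 = 1024 for this matmul.
print(analysis["flops"])
```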
-
Hi, I have the following setup:
- Transformer model with N layers scanned over input
- fully sharded data parallel sharding
- asynchronous communications (latency-hiding scheduler, pipelined all-gather…