-
## Fix the Op info test for `bincount`
1. Find line 17 of [test_ops.py](test/test_ops.py) and remove
`bincount` from `skip_list`
2. Run the op info test with `pytest test/test_ops.py`
3. Fix the f…
qihqi updated 1 month ago
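As a reference point while fixing the test, the semantics that `torch.bincount` is expected to satisfy can be sketched in plain Python (this is a sketch of the op's contract, not the torch_xla implementation):

```python
def bincount(values, minlength=0):
    """Sketch of torch.bincount semantics for a list of non-negative
    ints: out[i] is the number of times i appears in values."""
    if any(v < 0 for v in values):
        raise ValueError("bincount requires non-negative integers")
    size = max((max(values) + 1) if values else 0, minlength)
    out = [0] * size
    for v in values:
        out[v] += 1
    return out

print(bincount([0, 1, 1, 3]))      # [1, 2, 0, 1]
print(bincount([2], minlength=5))  # [0, 0, 1, 0, 0]
```

Comparing the failing op's output against this contract (shape `max(input)+1` or `minlength`, whichever is larger) usually narrows down where the lowering diverges.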
-
## Fix the Op info test for `exponential`
1. Find line 60 of [test_ops.py](test/test_ops.py) and remove
`exponential` from `skip_list`
2. Run the op info test with `pytest test/test_ops.py`
3. Fix…
qihqi updated 1 month ago
-
I am trying to compile LLaVA 1.5 7B for Neuron. As far as I can tell, the way to do this is to select some specific inputs and then trace the model's execution with those inputs. However, when I try to trac…
-
Hello XLA team,
I am trying to control a single compiler pass in XLA and inspect the dot graph before and after the optimization is applied. I am running a simple matmul operation for this. For…
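One common approach (assuming a recent XLA build; these flag names come from XLA's debug options and may vary by version) is to use `XLA_FLAGS` to dump HLO around the passes you care about:

```shell
# Dump HLO as .dot graphs before and after passes whose names match
# the regex -- here, only the algebraic simplifier.
# Flag names are from XLA's DebugOptions and may differ by version.
export XLA_FLAGS="--xla_dump_to=/tmp/xla_dump \
  --xla_dump_hlo_as_dot \
  --xla_dump_hlo_pass_re=algebraic-simplifier"
python my_matmul.py  # my_matmul.py is a placeholder for your script
```

To isolate the effect of a pass, `--xla_disable_hlo_passes=<pass-name>` can be added to a second run and the two dumps compared.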
-
Hi,
I'm still following the tutorial. I'm trying to do the calculations with the predefined LiH molecule. This is my code which I basically copied from the [tutorial](https://deepqmc.github.io/tuto…
-
## Description
We need to implement a testing framework for the TensorFlow XLA (Accelerated Linear Algebra) compiler. This framework should comprehensively test the XLA compiler, ensuring it …
-
## 🐛 Bug
## To Reproduce
Here is a short example to reproduce the error, running on a vp-16 TPU pod:
```
import numpy as np
import torch_xla.core.xla_model as xm
import torch_xla.runtime…
```
-
## 🚀 Feature
Right now we can run XLA GPU training by setting the `GPU_NUM_DEVICES` env variable (and unsetting `XRT_TPU_CONFIG`).
@jysohn23 noticed code for multi-node XLA GPU training, e.g. [thi…
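For context, the current single-node flow looks roughly like this (the script name is a placeholder; the env variables are the ones named above):

```shell
# Single-node XLA GPU run: unset the TPU config and point the runtime
# at the local GPUs via GPU_NUM_DEVICES.
unset XRT_TPU_CONFIG
GPU_NUM_DEVICES=4 python train.py  # train.py is a placeholder script
```

The feature request is essentially to extend this to multiple nodes, where each node would also need to know the world size and its own rank.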
-
## Fix the Op info test for `linalg.ldl_solve`
1. Find line 123 of [test_ops.py](test/test_ops.py) and remove
`linalg.ldl_solve` from `skip_list`
2. Run the op info test with `pytest test/test_ops…
qihqi updated 1 month ago
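As a reference while fixing the test, the math behind `linalg.ldl_solve` can be sketched in plain Python. Note the real op consumes the factors and pivots produced by `linalg.ldl_factor`; this sketch folds factorization and solve together, without pivoting, purely to show what answer is expected:

```python
def ldl_solve(A, b):
    """Sketch of the math behind linalg.ldl_solve: solve A x = b for
    symmetric A via A = L D L^T (no pivoting; illustration only)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    D = [0.0] * n
    for j in range(n):
        D[j] = A[j][j] - sum(L[j][k] ** 2 * D[k] for k in range(j))
        L[j][j] = 1.0
        for i in range(j + 1, n):
            L[i][j] = (A[i][j]
                       - sum(L[i][k] * L[j][k] * D[k] for k in range(j))) / D[j]
    # forward solve: L y = b
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    # diagonal solve: D z = y
    z = [y[i] / D[i] for i in range(n)]
    # backward solve: L^T x = z
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = z[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))
    return x

print(ldl_solve([[4.0, 2.0], [2.0, 3.0]], [1.0, 2.0]))  # [-0.125, 0.75]
```

Checking the failing op against a small symmetric system like this one makes it easy to see whether the lowering mishandles the factors, the pivots, or the triangular solves.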
-
XLA is very fast, but because each new input shape triggers a recompilation, online services have to pad variable-sized requests to fixed shapes to keep up with changing request sizes.
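The usual workaround is to pad each request up to one of a small set of bucket sizes, so the compiled program only ever sees a handful of distinct shapes. A generic sketch, not tied to any particular serving stack (the bucket sizes here are made up for illustration):

```python
import bisect

BUCKETS = [16, 32, 64, 128]  # example bucket sizes, tuned per workload

def pad_to_bucket(tokens, pad_id=0):
    """Pad a variable-length request to the next bucket size, so the
    compiled program sees at most len(BUCKETS) distinct shapes."""
    i = bisect.bisect_left(BUCKETS, len(tokens))
    if i == len(BUCKETS):
        raise ValueError("request longer than the largest bucket")
    target = BUCKETS[i]
    return tokens + [pad_id] * (target - len(tokens))

print(len(pad_to_bucket([1] * 20)))  # 32: padded up to the next bucket
print(len(pad_to_bucket([1] * 16)))  # 16: already exactly on a bucket
```

The trade-off is wasted compute on the padding versus the recompilation cost of serving every shape exactly, which is the tension the comment above is pointing at.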