-
## 🚀 Feature
```python
import torch_xla.utils.serialization as xser
xser.save(model.state_dict(), path)
```
### Motivation
In cases where memory is limited compared to the size of t…
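The memory-saving idea behind the request (serialize the checkpoint incrementally instead of materializing it all at once on the host) can be sketched with plain `pickle`; `save_streamed`/`load_streamed` below are hypothetical stand-ins for illustration, not the torch_xla API:

```python
import io
import pickle

def save_streamed(state_dict, fileobj):
    # Write the entry count, then each (name, value) pair independently,
    # so peak host memory stays near the size of a single entry rather
    # than the whole state_dict.
    pickle.dump(len(state_dict), fileobj)
    for name, value in state_dict.items():
        pickle.dump((name, value), fileobj)

def load_streamed(fileobj):
    # Read entries back one at a time from the same stream.
    n = pickle.load(fileobj)
    return dict(pickle.load(fileobj) for _ in range(n))

buf = io.BytesIO()
save_streamed({"w": [1.0, 2.0], "b": [0.5]}, buf)
buf.seek(0)
restored = load_streamed(buf)
```

This is only a sketch of the streaming pattern; the actual `xser.save` additionally moves XLA tensors off the device before writing.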
-
## ❓ Questions and Help
I use this code
```python
import unittest
import torch
import torch.nn.functional as F
import torch_xla
import torch_xla.core.xla_model as xm
class TestInterpolate(unittest.…
```
-
## 🐛 Bug
The step function inside the `while_loop` operator can't create or reference extra tensors or constants. Doing so crashes Python while lowering.
## To Reproduce
```python
import tor…
```
-
Initial experiments show that, modulo some smallish fixes, PyTorch XLA could work:
```python
import pyhf
import torch
import torch_xla
import torch_xla.core.xla_model as xm
spec = {
'channels':…
```
-
## Fix the Op info test for `nn.functional.feature_alpha_dropout .. nn.functional.grid_sample`
1. Find lines 223 to 227 of [test_ops.py](test/test_ops.py) and remove
`nn.functional.feature_alp…
-
Details:
NVIDIA-SMI 550.54.15, Driver Version: 550.54.15, CUDA Version: 12.4
Support for Kepler GPUs was removed in CUDA 12.x. How can I compile torch_xla for GPU with CUDA Version 12.x (the GPU guide uses CUDA 1…
-
## 🐛 Bug
My training run with PyTorch XLA is running very slowly (on TPU v3-64), and as suggested by the debug messages, I am submitting dumps of the execution graphs and soliciting feedback on how…
-
RNN cannot be jit compiled, see error below:
```
Detected unsupported operations when trying to compile graph __inference_one_step_on_data_993[] on XLA_GPU_JIT: CudnnRNN (No registered 'CudnnRNN' Op…
```
-
Hi All,
I'm having some trouble running N2V. I have a computer with an NVIDIA RTX A5000 and Ubuntu 18.04.
I use conda to install N2V according to:
`$ conda create -n 'n2v' python=3.7`
`$ s…
-
## Fix the Op info test for `linalg.lu_solve`
1. Find line 128 of [test_ops.py](test/test_ops.py) and remove
`linalg.lu_solve` from `skip_list`
2. Run op_info test with `pytest test/test_ops.p…