-
# Context
## Objective
This RFC lays out the roadmap for making eager mode the default computation mode for PyTorch/XLA users, and for enabling graph compilation within that mode.
…
-
Hi, thank you so much for releasing this wonderful codebase. When I try to run pretrain_llama_7b on a v3 TPU pod, I get this error:
```
ERROR: Accessing retired flag 'jax_enable_async_collec…
-
## 🐛 Bug
The following [GPT-2 code](https://github.com/miladm/build-nanogpt-ptxla/commit/4135bfc7fc577af93b88e31e0228f2d4fb9a775d) OOMs on TPU v4-8 with >4 attention layers (i.e. `n_layer > 4`). Th…
-
Hi,
I noticed a problem with the time grid in euler_sample when XLA compilation (with a dynamic shape for the times) is used.
https://colab.research.google.com/drive/1kg8RChmJ3TxdONBHE1lIdCAmZK…
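The linked notebook is truncated, so purely as an illustration: one common way a Euler time grid drifts under shape-dependent compilation is when the grid is built by repeatedly adding a step size, which accumulates floating-point error, whereas building it in one shot pins the endpoint exactly. The helper names below are hypothetical, not from euler_sample itself:

```python
import numpy as np

def euler_grid_accumulate(t0, t1, n):
    # Hypothetical sketch: build the grid by repeated addition of dt.
    # Rounding error accumulates, so the final point drifts off t1.
    dt = (t1 - t0) / n
    ts = [t0]
    for _ in range(n):
        ts.append(ts[-1] + dt)
    return np.array(ts)

def euler_grid_linspace(t0, t1, n):
    # Build the grid in one shot with a static shape of n + 1 points;
    # np.linspace guarantees the endpoint is exactly t1.
    return np.linspace(t0, t1, n + 1)

acc = euler_grid_accumulate(0.0, 1.0, 1000)
lin = euler_grid_linspace(0.0, 1.0, 1000)
# acc[-1] typically differs from 1.0 by a few ulps; lin[-1] == 1.0 exactly.
```

Whether this is the mechanism behind the reported problem depends on the truncated notebook, but a statically shaped, endpoint-pinned grid is also friendlier to XLA than one whose length depends on runtime values.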
-
For example, for compiling from source, the repository documentation tells me to run:
```
docker run --gpus all --name xla_gpu -w /xla -it -d --rm -v ./xla:/xla tensorflow/build:latest-python3.9 bash
```
…
-
## Description
XLA is an abstraction layer over the computation graph, aimed at better efficiency, consistency, portability, and more, as [they claim](https://www.tensorflow.org/xla/architecture). However, the m…
-
I tried just running the livebook. For context, I have a 4090 that is also driving X Windows for three monitors, so it has around 22 GB of VRAM available. When I do this, memory usage spikes and it throws an error:…
-
This is the error I get after running a simple TensorFlow command:
```
In [3]: import tensorflow as tf
In [4]: print(tf.reduce_sum(tf.random.normal([1000, 1000])))
2021-02-17 08:10:00.365011: I tensor…
```
-
## ❓ Questions and Help
I use this code
```
import unittest
import torch
import torch.nn.functional as F
import torch_xla
import torch_xla.core.xla_model as xm
class TestInterpolate(unittest.…
-
## 🐛 Bug
The step function inside the `while_loop` operator can't create or reference extra tensors or constants. Doing so crashes Python while lowering.
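If the limitation is that the step function may only see its loop carries (an assumption here, since the repro below is truncated), the usual workaround is to thread every value the body needs through the carries instead of capturing it from the enclosing scope. A pure-Python stand-in, not the torch_xla operator itself:

```python
def while_loop(cond_fn, body_fn, carries):
    # Pure-Python stand-in for a structured while_loop:
    # cond_fn and body_fn may only look at the carries they are passed,
    # never at variables captured from the enclosing scope.
    while cond_fn(*carries):
        carries = body_fn(*carries)
    return carries

# Workaround pattern: instead of closing over `offset` inside the body,
# thread it through the loop as an extra carry.
offset = 10

def cond_fn(i, total, off):
    return i < 3

def body_fn(i, total, off):
    return i + 1, total + off, off

i, total, off = while_loop(cond_fn, body_fn, (0, 0, offset))
# i == 3, total == 30, off == 10
```

The same pattern applies to constants: materialize them outside the loop and pass them in as carries, so the step function never has to create new tensors during lowering.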
## To Reproduce
```python
import tor…