-
# Context
## Objective
This RFC outlines the roadmap for making eager mode the default computation mode for PyTorch/XLA users, and for enabling graph compilation within that mode.
…
-
## ❓ Questions and Help
I am using the following code:
```python
import unittest
import torch
import torch.nn.functional as F
import torch_xla
import torch_xla.core.xla_model as xm
class TestInterpolate(unittest.…
```
-
Hello,
I have been delving into the XLA project recently and have a few questions about accessing MHLO from the XLA compiler. The XLA compiler provides a broad array of optimizations, and …
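For readers with a similar question: one common way to inspect the compiler's intermediate output is XLA's dump flags. The two flags below are real XLA options; the dump directory and program name are placeholders:

```shell
# Dump the HLO modules XLA compiles, as text, into a chosen directory.
# /tmp/xla_dump and my_program.py are placeholders.
XLA_FLAGS="--xla_dump_to=/tmp/xla_dump --xla_dump_hlo_as_text" python my_program.py
```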
-
Recently, we added a configuration to enable XLA on the Windows platform (https://github.com/openxla/xla/pull/11299). Next, to provide comprehensive guidance on building XLA from source and running XL…
-
**Describe the bug**
Training BERT using Keras NLP is significantly slower because `keras.layers.Embedding` is not XLA-compatible by default on TensorFlow GPU. This is similar to an issue repor…
-
Hello,
I'm very new to the XLA project, so pardon my ignorance here. I'm trying to learn more about the project, optimizations, and their interactions with the hardware. The code everywhere seems w…
-
Previous XLA custom call API versions pass parameters as `void** buffers`. The new version, [typed FFI](https://github.com/openxla/xla/tree/main/xla/ffi), allows passing metadata such as data type and…
-
I'm scratching my head at the following case, where we're [trying](https://github.com/conda-forge/tensorflow-feedstock/pull/385) to build the most recent tensorflow in conda-forge (tensorflow in itsel…
-
For example, source code compilation:
The repository documentation tells me:

```shell
docker run --gpus all --name xla_gpu -w /xla -it -d --rm -v ./xla:/xla tensorflow/build:latest-python3.9 bash
```
…
-
## 📚 Usability / API / Documentation
`get_ordinal` and `world_size` (1) are inconsistent in their use of the underlying library and (2) are being migrated from `xla_model` to `runtime`.
(1) …
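As a sketch of the migration direction, the replacement calls live in `torch_xla.runtime` (the import is guarded so this runs without torch_xla installed; the fallback values are single-process placeholders, and the exact function names reflect my reading of the `runtime` module, not this issue):

```python
# Migration sketch: xla_model.get_ordinal() -> runtime.global_ordinal(),
# and world_size moves to runtime.world_size().
try:
    import torch_xla.runtime as xr
    rank, world = xr.global_ordinal(), xr.world_size()
except ImportError:
    # torch_xla not installed: placeholder single-process values.
    rank, world = 0, 1

assert 0 <= rank < world
print(rank, world)
```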