pytorch / xla

Enabling PyTorch on XLA Devices (e.g. Google TPU)
https://pytorch.org/xla

The ability to exchange between TPU computation and CPU (GPU) computation #4426

Open rwbfd opened 1 year ago

rwbfd commented 1 year ago

🚀 Feature

The ability to exchange between TPU computation and CPU (GPU) computation

Motivation

As far as I know, it is not yet possible to combine a CPU pipeline with the TPU framework. There are two primary examples of this:

  1. In diffusion models, random numbers must be generated when using a numerical SDE solver. I am not sure whether the TPU can handle a random number generator.
  2. In many CV applications, it is important to generate data augmentations, which is typically a CPU-side pipeline.
steventk-g commented 1 year ago

When necessary, XLA already falls back to the CPU instead of the TPU to execute computations. Furthermore, you can move tensors back and forth between XLA and CPU devices if you need to perform computation on the CPU explicitly.
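
For concreteness, here is a minimal sketch of that explicit movement, assuming `torch_xla` is installed and an XLA device (e.g. a TPU core) is available; the CPU-side step is just a placeholder for any host-only work such as data augmentation:

```python
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()  # acquire the XLA (e.g. TPU) device

# Computation on the XLA device
x = torch.randn(4, 4, device=device)
y = x @ x

# Move the result to the CPU for host-side work
# (placeholder for e.g. NumPy-based augmentation or RNG)
y_cpu = y.cpu()
y_cpu = y_cpu + torch.randn_like(y_cpu)

# Move back to the XLA device to continue TPU computation
y = y_cpu.to(device)
```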