isaac-sim / IsaacLab

Unified framework for robot learning built on NVIDIA Isaac Sim
https://isaac-sim.github.io/IsaacLab

[Question] Using Jax #12

Closed: jrabary closed this issue 1 year ago

jrabary commented 1 year ago

Is it possible to use Jax as the learning framework without any loss of simulation performance/speed?

Mayankm96 commented 1 year ago

Natively, Isaac Sim supports numpy and torch backends, while Orbit right now mainly uses torch since that is more useful for GPU parallelization. Additionally, for small-dimension vectors, there isn't much overhead in moving them from torch to numpy when needed (such as for ROS). Our future plan is to support the warp backend, since that would simplify processing large image observations in a batched manner.
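For reference, the torch-to-numpy hand-off mentioned above is just the standard conversion; a minimal sketch (the variable names are illustrative, not Orbit API):

import torch

# Illustrative only: move a small state vector from the torch backend to numpy,
# e.g. before publishing it over ROS. For vectors of this size the
# device-to-host copy is negligible.
joint_positions = torch.rand(7, device="cuda")  # hypothetical 7-DoF arm state
joint_positions_np = joint_positions.cpu().numpy()
print(joint_positions_np.shape, joint_positions_np.dtype)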

We haven't yet investigated using Jax. From what I saw here, there aren't any PyTorch<->Jax operations available right now. It would definitely be useful for researchers, since Jax has its benefits.

Are there any Gym environments that support Jax? We can take a look and see how much effort this would take.

StoneT2000 commented 1 year ago

Some gym envs in jax: https://github.com/RobertTLange/gymnax. Then there's also Brax.
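For context, a single environment step in gymnax looks roughly like this (sketch following the gymnax README, untested here):

import jax
import gymnax

# Everything is a pure JAX function, so reset/step can be jit-compiled and vmapped.
rng = jax.random.PRNGKey(0)
rng, key_reset, key_act, key_step = jax.random.split(rng, 4)

env, env_params = gymnax.make("CartPole-v1")
obs, state = env.reset(key_reset, env_params)
action = env.action_space(env_params).sample(key_act)
obs, state, reward, done, info = env.step(key_step, state, action, env_params)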

For converting between jax and torch tensors directly on CUDA, you can use dlpack like so:

import jax
import jax.dlpack
import torch
import torch.utils.dlpack

def jax_to_torch(x):
    # Zero-copy: expose the JAX device buffer to torch via DLPack.
    return torch.utils.dlpack.from_dlpack(jax.dlpack.to_dlpack(x))

def torch_to_jax(x):
    # Zero-copy: expose the torch CUDA tensor to JAX via DLPack.
    return jax.dlpack.from_dlpack(torch.utils.dlpack.to_dlpack(x))

a = torch.tensor([1, 2, 3]).cuda()
a_jax = torch_to_jax(a)
print(a_jax)
# both a and a_jax stay on the GPU and share the same memory

Mayankm96 commented 1 year ago

Nice! Thanks a lot for pointing this out @StoneT2000

Is there significant overhead in these conversions when dealing with large tensors? If not, then I am happy to add an interface that makes the environments compatible with JAX-based libraries.

StoneT2000 commented 1 year ago

Maybe? I haven't tested it at scale yet, although I plan to soon. It would be nice to see a speed test.

Mayankm96 commented 1 year ago

I tried to make a script out of the code mentioned above but haven't been able to run it; I always get some import error when loading jax. Currently, there are other higher-priority tasks queued up for us.

I am leaving the script here for someone to try it out :)

import torch
import torch.utils.dlpack
import jax
import jax.dlpack

import time

# Generic helpers for converting tensors between JAX and PyTorch via DLPack.

def j2t(x_jax):
  x_torch = torch.utils.dlpack.from_dlpack(jax.dlpack.to_dlpack(x_jax))
  return x_torch

def t2j(x_torch):
  x_torch = x_torch.contiguous()  # https://github.com/google/jax/issues/8082
  x_jax = jax.dlpack.from_dlpack(torch.utils.dlpack.to_dlpack(x_torch))
  return x_jax

# time torch -> JAX conversion
x = torch.randn(2048, 512, 4).cuda()
start = time.perf_counter()
for _ in range(100):
  y = t2j(x)
print(f"JAX time    : {(time.perf_counter() - start) / 100:.3f} ms")

# time JAX -> torch conversion
x = jax.random.normal(jax.random.PRNGKey(0), (2048, 512, 4))
start = time.perf_counter()
for _ in range(100):
  y = j2t(x)
print(f"PyTorch time: {(time.perf_counter() - start) / 100:.3f} ms")