This is a good question. You're right: the culprit here is mjx.get_data. Any workflow where you copy data to the host and then back to the GPU to render is going to kill throughput.
Doing this right requires building interfaces that allow MJX and a renderer to share the same GPU buffers. The most promising avenue for this is Madrona, and we are experimenting with them on ways to integrate it with MJX, but it's early days.
We are also working on a simple depth renderer native to MJX, built on mjx.ray. However, I suspect the Madrona integration will be both faster and more powerful. Stay tuned, this is actively under development.
Thanks for the answer :-) Glad to hear it's being worked on. Anything early I can play with?
I have a depth camera implementation using mjx.ray from a personal project, which I discuss here. It renders quite quickly, and I've successfully used it for vision-based policy learning (not quite RL; it uses Analytical Policy Gradients, so it only needs <100 envs).
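The rough idea, as a simplified sketch rather than my actual project code (the camera origin, ray directions, and the batched usage comment are placeholders): cast one mjx.ray call per pixel and vmap over the ray directions, then vmap again over the batch of environments.

```python
import jax
import jax.numpy as jnp
from mujoco import mjx

def depth_image(m: mjx.Model, d: mjx.Data,
                cam_pos: jax.Array,    # (3,) camera origin in world coordinates
                ray_dirs: jax.Array):  # (H*W, 3) unit ray directions, one per pixel
  # Cast one ray per pixel; mjx.ray returns (distance, geom_id).
  dist, _ = jax.vmap(lambda v: mjx.ray(m, d, cam_pos, v))(ray_dirs)
  # A negative distance means the ray hit nothing; treat that as infinite depth.
  return jnp.where(dist < 0, jnp.inf, dist)

# For a batch of environments, vmap again over the batched mjx.Data:
# depths = jax.vmap(depth_image, in_axes=(None, 0, None, None))(m, batch, cam_pos, ray_dirs)
```

Because everything stays in JAX, this composes with jit and never leaves the device, which is why it avoids the copy problem discussed above.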
Hi,
I'm an engineer and I'm trying to use MuJoCo for fun.
I'm looking for some help with rendering simulations in MJX. My model is extremely simple, so that might be part of the issue, e.g. I'm just not doing enough compute (model at the bottom of this post).
I've verified that I'm definitely using EGL. The time to render a single scene is fine; it's my method for rendering all 4096 scenes that's slow.
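(For reference, EGL is selected the standard way via the MUJOCO_GL environment variable; a minimal sketch, not my exact setup:)

```python
import os
os.environ["MUJOCO_GL"] = "egl"  # must be set before mujoco is imported

import mujoco  # offscreen rendering now goes through the EGL backend
```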
My current sim code:
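A minimal sketch of the pattern in question, assuming mjx.get_data returns a list of MjData for batched input; NUM_ENVS, the model path, the camera resolution, and the initialization noise are illustrative placeholders rather than my exact script:

```python
import jax
import mujoco
from mujoco import mjx

NUM_ENVS = 4096  # illustrative; matches the scale described above

model = mujoco.MjModel.from_xml_path("model.xml")  # placeholder path
mjx_model = mjx.put_model(model)

# Build a batch of NUM_ENVS slightly perturbed copies of the initial state on the GPU.
base = mjx.put_data(model, mujoco.MjData(model))
rngs = jax.random.split(jax.random.PRNGKey(0), NUM_ENVS)
batch = jax.vmap(
    lambda rng: base.replace(
        qpos=base.qpos + 0.01 * jax.random.normal(rng, (model.nq,))))(rngs)

jit_step = jax.jit(jax.vmap(mjx.step, in_axes=(None, 0)))

renderer = mujoco.Renderer(model, height=64, width=64)

def render_batch(batch):
  # The expensive part: mjx.get_data copies every env back to host memory,
  # and each env is then rendered one at a time in a Python loop.
  frames = []
  for d in mjx.get_data(model, batch):  # list of MjData for batched input
    renderer.update_scene(d)
    frames.append(renderer.render())
  return frames

SIM_HZ, RENDER_HZ = 1000, 10
for step in range(SIM_HZ):               # one simulated second
  batch = jit_step(mjx_model, batch)
  if step % (SIM_HZ // RENDER_HZ) == 0:  # simulate at 1 kHz, render at 10 Hz
    frames = render_batch(batch)
```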
When I have a lot of parallel sims, the copy and the for loop can take a significant amount of time. I can mitigate this slightly with a background thread, especially since I'm simulating at 1 kHz and only rendering at 10 Hz. Is there a way to avoid the copy and the loop entirely? It feels like it should be possible, since the data is already on the GPU that's doing the rendering.
Model: