mihdalal / planseqlearn

[ICLR 2024] PyTorch Code for Plan-Seq-Learn: Language Model Guided RL for Solving Long Horizon Robotics Tasks
https://mihdalal.github.io/planseqlearn/

How were the figure images rendered? #4

Closed: MandiZhao closed this issue 4 months ago

MandiZhao commented 4 months ago

The renderings on the project website look really good, a lot better than the other robosuite/metaworld figures I've seen. Just curious whether some external rendering engine was used. Thanks!

mihdalal commented 4 months ago

Thanks! Yes, I extended the NVISII rendering code that is present in robosuite (https://github.com/ARISE-Initiative/robosuite/tree/master/robosuite/renderers/nvisii) to support metaworld, d4rl kitchen, and obstructed suite. The basic idea is to save the mujoco states from the video and then load them into NVISII and run ray-tracing-based rendering. My code for doing this is very hacky and hardcoded at the moment, but I might release a nicer version of it at some point. If you want my current version, shoot me an email and I can send it to you!
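For anyone curious, a minimal sketch of the save-states-then-replay idea described above, assuming a robosuite environment and its bundled NVISII renderer. This is not the authors' released code; the environment name, rollout length, and the exact `NVISIIRenderer` constructor arguments and method names are assumptions based on the linked robosuite renderer, so check them against your installed version.

```python
# Hypothetical sketch: record flattened MuJoCo sim states during a fast rollout,
# then replay them offline through robosuite's NVISII ray-tracing renderer.
import numpy as np
import robosuite as suite
from robosuite.renderers.nvisii.nvisii_renderer import NVISIIRenderer

# 1) Roll out with the cheap default simulation (no ray tracing) and save states.
env = suite.make("Lift", robots="Panda", has_renderer=False)
env.reset()
states = []
for _ in range(200):
    action = np.random.uniform(-1, 1, env.action_dim)  # stand-in for the learned policy
    env.step(action)
    states.append(env.sim.get_state().flatten())

# 2) Replay the saved states through the ray tracer, one frame per state.
#    Renderer arguments/methods below are approximate; see the robosuite source.
renderer = NVISIIRenderer(env=env, width=1920, height=1080, spp=256, img_path="frames/")
for state in states:
    env.sim.set_state_from_flattened(state)
    env.sim.forward()
    renderer.update()   # sync NVISII scene objects to the current MuJoCo state
    renderer.render()   # write one ray-traced frame to img_path
renderer.close()
```

The saved frames can then be stitched into a video with any encoder (e.g. ffmpeg). Supporting metaworld or d4rl kitchen would additionally require registering those assets with the NVISII scene, which is the part the robosuite renderer does not cover out of the box.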

MandiZhao commented 4 months ago

Really cool, thanks!!