StanfordVL / OmniGibson

OmniGibson: a platform for accelerating Embodied AI research built upon NVIDIA's Omniverse engine. Join our Discord for support: https://discord.gg/bccR5vGFEx
https://behavior.stanford.edu/omnigibson/

examples/observations does not exist #733

Closed: ZZWENG closed this issue 2 months ago

ZZWENG commented 4 months ago

Hi, the README under the examples folder says that there is an observations folder, but it does not exist. Could you point me to where it is?

Also, are there any demos that show how to render and save a video of the scene on a headless server?

wensi-ai commented 4 months ago

Hi, apologies for the confusion. The README under examples hasn't been updated in a while, and there is actually no observations folder.

For rendering from sensors, you can take a look at the sensor module documentation. Basically, you can create a VisionSensor and get the rendered image with get_obs() at each timestep.
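
As a rough sketch (the VisionSensor constructor arguments and the load/initialize calls below are assumptions about typical usage and may differ across OmniGibson versions; only VisionSensor and get_obs() come from this thread):

import omnigibson as og
from omnigibson.sensors import VisionSensor  # assumed import path

# Hypothetical camera setup: prim_path, name, modalities, and image size are illustrative.
cam = VisionSensor(
    prim_path="/World/example_camera",
    name="example_camera",
    modalities=["rgb"],
    image_height=720,
    image_width=1280,
)
# Depending on the OmniGibson version, the sensor may need to be loaded and
# initialized before it produces observations.
cam.load()
cam.initialize()

for _ in range(100):
    og.sim.step()
    obs, info = cam.get_obs()  # observation dict (plus info), matching the viewer-camera example below
    rgb = obs["rgb"]           # H x W x 4 RGBA image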

ChengshuLi commented 2 months ago

Apologies! That README is completely outdated. Will remove it in the next release.

Regarding your question about how to render and save a video of the scene on a headless server, you can also use the default viewer camera:

# By default, images are of shape (720, 1280, 4)
img1 = og.sim.viewer_camera.get_obs()[0]["rgb"]
# Move the viewer camera, then render so the next frame reflects the new pose
og.sim.viewer_camera.set_position_orientation(new_pos, new_orn)
og.sim.render()
img2 = og.sim.viewer_camera.get_obs()[0]["rgb"]
...
# Save img1, img2, ... into a video.
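
For the last step, one simple way to stitch the collected frames into a video on a headless server is with a video library such as imageio (an assumption; it is not part of OmniGibson, and any other video writer works just as well):

import imageio
import numpy as np

# Assumes imageio with the ffmpeg backend is installed (pip install imageio[ffmpeg]).
writer = imageio.get_writer("scene.mp4", fps=30)
for frame in [img1, img2]:               # frames collected as above
    frame = np.asarray(frame)[..., :3]   # drop the alpha channel for mp4 encoding
    writer.append_data(frame.astype(np.uint8))
writer.close()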