OmniGibson: a platform for accelerating Embodied AI research built upon NVIDIA's Omniverse engine. Join our Discord for support: https://discord.gg/bccR5vGFEx
Hi guys, I am very excited about the OmniGibson simulator and want to try an instruction-following task on the platform. I am wondering whether there are any prebuilt high-level actions, like 'open' or 'switchon', for interacting with objects. So far I have only found that a robot's action space is a vector controlling low-level motion.
I found 'symbolic_semantic_action_primitives' and 'starter_semantic_action_primitives' in the docs. What is the difference between the two? It seems the first one supports more high-level actions.