This commit adds the MoMa-LLM paper, which describes multi-room mobile manipulation from language prompts based on scene graphs. MoMa-LLM enables robotic navigation, exploration, and articulated object manipulation, demonstrated on the task of interactive object search.