sled-group / navchat

Code for the ICRA24 paper "Think, Act, and Ask: Open-World Interactive Personalized Robot Navigation". Paper: https://arxiv.org/abs/2310.07968 Video: https://www.youtube.com/watch?v=rN5S8QIhhQc
MIT License

transfer to real robot #2

Closed zhouak47 closed 1 month ago

zhouak47 commented 2 months ago

Thanks for your reply. I have successfully run the code with the HM3D dataset. Now I want to deploy ORION on an actual robot, and I am confused about how to do that. Could you share the code you used on real robots and give some advice? Thanks a lot.

YinpeiDai commented 2 months ago

Hi, we're still organizing the real robot code, but I can give you some details on it.

For the real robot, we make several simplifications as follows:

  1. Instead of building a VLMap, we use a topology graph (topograph) to store the objects. For each scene, we use 7~12 predefined viewpoints, so when the robot calls frontier-based exploration, we just run DFS over the topograph and use GroundingSAM to detect objects during the movement. Once an object is found, it is recorded in the topograph. Some landmark objects are also stored in the topograph.
  2. When a previously found object is asked about again, we retrieve the corresponding viewpoint from memory, then directly call goto_viewpoint() to navigate the robot there. We build a room map with GMapping before the experiment, so the robot can move to a specific pose via move_base without collision.
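To make the two steps above concrete, here is a minimal sketch of how such a topograph with object memory might look. This is not the authors' actual real-robot code: the class name, `detect_fn` (standing in for GroundingSAM detections), and `goto_fn` (standing in for a `goto_viewpoint()`/move_base call) are all illustrative assumptions.

```python
from collections import defaultdict

class TopoGraph:
    """Illustrative topological map: predefined viewpoints plus an
    object -> viewpoint memory, as described in the two steps above."""

    def __init__(self, edges):
        # edges: list of (viewpoint_a, viewpoint_b) pairs between viewpoints
        self.adj = defaultdict(list)
        for a, b in edges:
            self.adj[a].append(b)
            self.adj[b].append(a)
        self.object_memory = {}  # object name -> viewpoint where it was seen

    def explore_for(self, target, start, detect_fn, goto_fn):
        """DFS over viewpoints until detect_fn (e.g. an open-vocabulary
        detector such as GroundingSAM) spots the target object."""
        visited, stack = set(), [start]
        while stack:
            vp = stack.pop()
            if vp in visited:
                continue
            visited.add(vp)
            goto_fn(vp)  # stand-in for moving the robot to the viewpoint
            for obj in detect_fn(vp):
                self.object_memory[obj] = vp  # update the topograph memory
                if obj == target:
                    return vp
            stack.extend(self.adj[vp])
        return None  # exhausted all reachable viewpoints

    def goto_known(self, target, goto_fn):
        """Step 2: if the object was seen before, navigate straight to
        its stored viewpoint instead of exploring again."""
        vp = self.object_memory.get(target)
        if vp is not None:
            goto_fn(vp)  # stand-in for goto_viewpoint() via move_base
        return vp
```

On a real robot, `goto_fn` would publish a pose goal on the GMapping-built map and `detect_fn` would run the detector on the current camera frame; here they are left as injected callables so the graph logic stays testable.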

Hope this can help you!

zhouak47 commented 1 month ago

Thanks for your kind reply