Hello, I believe there is a bug in "navigation.py". Please check lines 196, 197, and 216. At line 216, "closest_node" is selected from a limited range, typically between 0 and 10, because it is an index into the current window rather than into the whole topological map. Lines 196 and 197 then update "start" and "end" with this "closest_node", which is therefore confined to the same 0-10 range. Consequently, "start" and "end" never advance over time, and the program may not correctly select a subgoal from the topological map images at lines 220 or 224.
This bug makes the program choose the wrong topomap image, potentially guiding the robot toward a collision. It also prevents the robot from stopping at the goal node (line 231).
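For reference, here is a minimal sketch of the behavior described above and the offset fix, assuming the subgoal-window update works roughly like this (variable names such as "distances", "radius", and "num_nodes" are my own placeholders, not the actual identifiers in navigation.py):

```python
import numpy as np

def update_window_buggy(distances, start, radius, num_nodes):
    # "distances" holds the model's predicted distances to the topomap nodes
    # in the current window [start, end]; argmin is therefore window-relative.
    closest_node = int(np.argmin(distances))        # always in [0, len(distances) - 1]
    new_start = max(closest_node - radius, 0)       # stays pinned near the first nodes
    new_end = min(closest_node + radius, num_nodes - 1)
    return closest_node, new_start, new_end

def update_window_fixed(distances, start, radius, num_nodes):
    # Offset by "start" so closest_node indexes the whole topomap,
    # letting the window slide forward as the robot progresses.
    closest_node = start + int(np.argmin(distances))
    new_start = max(closest_node - radius, 0)
    new_end = min(closest_node + radius, num_nodes - 1)
    return closest_node, new_start, new_end
```

With the window-relative index converted to a global one, "start" and "end" track the robot's progress along the topomap, so the subgoal lookups at lines 220/224 and the goal-node check at line 231 should behave as intended.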
I guess you can go ahead and close this thread as soon as you confirm the issue.
Best.
Did you encounter any obstacle avoidance problems during your testing? Looking forward to your reply.
@han-kyung-min Good catch; I just updated the code to patch this bug: https://github.com/robodhruv/visualnav-transformer/commit/7b5b24cf12d0989fb5b5ff378d5630dd737eec3b. Hopefully, this will resolve a lot of the collision errors, but please let us know if it does not.
Thank you for making the ViNT model available to the public.
While attempting to run the ViNT model, I encountered an issue that I need your help with. I successfully created topological map images and then launched "navigation.py" to drive my robot, following the tutorial provided in this link.
I generated the topological map images based on a very simple trajectory, as demonstrated in this folder. My objective is to guide the robot to follow this trajectory.
However, my robot is making inaccurate turning decisions, leading to collisions with walls. You can view the experiment's results in this video. There were human interventions at 0:43 and 1:06 to prevent collisions.
I'm wondering if you could offer any advice on what might be missing. I believe I've made all the necessary modifications to the names of the ROS topics and parameters. ViNT ran zero-shot, without any extra training.
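In case it is relevant, this is roughly how I wired my camera topic to the one "navigation.py" subscribes to; both topic names below are placeholders for my own setup, not the repository's actual defaults:

```python
#!/usr/bin/env python3
# Minimal relay node: republish my fisheye camera stream under the topic name
# that navigation.py listens on (both topic names here are placeholders).
import rospy
from sensor_msgs.msg import Image

rospy.init_node("camera_topic_relay")
pub = rospy.Publisher("/usb_cam/image_raw", Image, queue_size=1)      # topic I configured navigation.py to use (placeholder)
rospy.Subscriber("/my_robot/fisheye/image_raw", Image, pub.publish)   # my camera's actual topic (placeholder)
rospy.spin()
```

I only mention this so you can check whether the topic wiring itself could explain the behavior.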
For your reference, here are the specifications of my robot and sensors:
Robot: a differential drive robot (details available at this link)
Camera: ELP USB Fisheye Camera 180 Degree 1080P Lightburn Camera
OS/ROS: Ubuntu 20.04 / Noetic
GPU (on the PC): RTX 3070 Ti