TempleRAIL / drl_vo_nav

[T-RO 2023] DRL-VO: Learning to Navigate Through Crowded Dynamic Scenes Using Velocity Obstacles
https://doi.org/10.1109/TRO.2023.3257549
GNU General Public License v3.0

Pedestrian collision and scene file. #17

Closed YumengXiu closed 5 months ago

YumengXiu commented 5 months ago

@zzuxzt , Hi! Thanks for the great work! I have a question about the robot model used in Gazebo. I believe the simulator considers collisions between the human actors and the robot; how do you handle the pose drift when the robot collides with the simulated agents? Also, apart from generating maps from Gazebo to visualize in RViz, how do you create the map and the scene XML file?

Thanks for your help!

zzuxzt commented 5 months ago

Hi, thanks for your interest in our work.

  1. The amcl localization package helps the robot recover from small pose drifts (as long as they stay within amcl's relocalization tolerance). If the robot flips over, you need to reset the Gazebo world, as sketched below.
  2. The gmapping package can help create maps for Gazebo worlds, but the scene XML files need to be created manually.
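If you want to automate the Gazebo reset, something like the following sketch can work. It is not part of this repo; it just calls the standard gazebo_ros `/gazebo/reset_world` service when the robot's tilt (from `/odom`) exceeds a threshold, and the 60-degree threshold is only an illustrative choice.

```python
#!/usr/bin/env python
# Sketch (not from drl_vo_nav): reset the Gazebo world when the robot flips over.
# Assumes the standard gazebo_ros /gazebo/reset_world service and an /odom topic;
# the roll/pitch threshold is an illustrative choice.
import math

import rospy
from nav_msgs.msg import Odometry
from std_srvs.srv import Empty
from tf.transformations import euler_from_quaternion


def odom_cb(msg):
    q = msg.pose.pose.orientation
    roll, pitch, _ = euler_from_quaternion([q.x, q.y, q.z, q.w])
    # If the robot is tilted by more than ~60 degrees, assume it has flipped over.
    if abs(roll) > math.radians(60) or abs(pitch) > math.radians(60):
        rospy.logwarn("Robot flipped, resetting Gazebo world")
        reset_world()


if __name__ == "__main__":
    rospy.init_node("flip_monitor")
    rospy.wait_for_service("/gazebo/reset_world")
    reset_world = rospy.ServiceProxy("/gazebo/reset_world", Empty)
    rospy.Subscriber("/odom", Odometry, odom_cb)
    rospy.spin()
```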
YumengXiu commented 5 months ago

Thanks for your reply! Which tool do you use to generate the scene XML files? I would like to customize my own world and create the corresponding XML file.

zzuxzt commented 5 months ago

Sorry, I didn't use any tool; I just created it by hand. But you can check whether any existing tools can save you the manual work.
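For example, a small script can also generate the file for you. The sketch below is not from this repo; it assumes a pedsim_ros-style scenario format (obstacle/waypoint/agent tags), and all coordinates are placeholder values.

```python
#!/usr/bin/env python
# Sketch (not from drl_vo_nav): generate a pedsim_ros-style scene XML by script
# instead of writing it by hand. Tag names follow the pedsim scenario format;
# all coordinates below are placeholder values for illustration.
import xml.etree.ElementTree as ET

scenario = ET.Element("scenario")

# Static obstacles: line segments from (x1, y1) to (x2, y2).
for x1, y1, x2, y2 in [(0, 0, 10, 0), (10, 0, 10, 10)]:
    ET.SubElement(scenario, "obstacle",
                  x1=str(x1), y1=str(y1), x2=str(x2), y2=str(y2))

# Waypoints the pedestrians walk between (id, x, y) with a fixed radius.
for wid, x, y in [("w1", 2.0, 2.0), ("w2", 8.0, 8.0)]:
    ET.SubElement(scenario, "waypoint", id=wid, x=str(x), y=str(y), r="0.5")

# One group of n pedestrians spawned around (x, y), cycling through the waypoints.
agent = ET.SubElement(scenario, "agent",
                      x="2.0", y="2.0", n="3", dx="0.5", dy="0.5", type="0")
for wid in ("w1", "w2"):
    ET.SubElement(agent, "addwaypoint", id=wid)

ET.ElementTree(scenario).write("my_scene.xml",
                               xml_declaration=True, encoding="utf-8")
```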

YumengXiu commented 3 months ago

Sorry for reopening this thread. In the TurtleBot gym environment drl_nav_env.py, I see you set the initial pose by defining 18 situations, and the initial model state is different from the initial position in RViz. Is it completely random, and does the initial position have to be exactly the same as the initial model state?

zzuxzt commented 3 months ago

Since it is difficult to figure out the coordinate transformation between the robot's pose in the Gazebo world frame and the map frame, I manually set these 18 initial states to initialize the robot's starting point. A random goal point generated from one of these 18 initial states then becomes the robot's new random start point, so the robot obtains nearly completely random start/goal points during training. Note that, because of this unknown coordinate transformation, the initial model state will necessarily differ from the initial position in RViz.
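Roughly, the idea looks like the sketch below. This is not the actual drl_nav_env.py code; it just picks one of a few hand-picked poses expressed in the Gazebo world frame and applies it through the standard gazebo_ros `/gazebo/set_model_state` service. The pose values and the model name "mobile_base" are placeholders.

```python
#!/usr/bin/env python
# Sketch (not the actual drl_nav_env.py): initialize the robot at one of a few
# hand-picked poses given in the Gazebo world frame, via the standard gazebo_ros
# /gazebo/set_model_state service. Pose values and the model name "mobile_base"
# are placeholders.
import random

import rospy
from gazebo_msgs.msg import ModelState
from gazebo_msgs.srv import SetModelState
from tf.transformations import quaternion_from_euler

# Hand-picked (x, y, yaw) start states in the Gazebo world frame; the real
# environment uses 18 of these to cover the map.
INIT_STATES = [(1.0, 1.0, 0.0), (5.0, -2.0, 1.57), (-3.0, 4.0, 3.14)]


def reset_robot_pose(model_name="mobile_base"):
    x, y, yaw = random.choice(INIT_STATES)
    qx, qy, qz, qw = quaternion_from_euler(0.0, 0.0, yaw)

    state = ModelState()
    state.model_name = model_name
    state.pose.position.x = x
    state.pose.position.y = y
    state.pose.orientation.x = qx
    state.pose.orientation.y = qy
    state.pose.orientation.z = qz
    state.pose.orientation.w = qw

    rospy.wait_for_service("/gazebo/set_model_state")
    set_state = rospy.ServiceProxy("/gazebo/set_model_state", SetModelState)
    set_state(state)


if __name__ == "__main__":
    rospy.init_node("random_start_demo")
    reset_robot_pose()
```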