TempleRAIL / drl_vo_nav

[T-RO 2023] DRL-VO: Learning to Navigate Through Crowded Dynamic Scenes Using Velocity Obstacles
https://doi.org/10.1109/TRO.2023.3257549
GNU General Public License v3.0

State preprocessing #5

Closed tkddnjs98 closed 1 year ago

tkddnjs98 commented 1 year ago

Hello, author! I am a student interested in building obstacle avoidance with reinforcement learning. I saw that you used pedestrian information taken directly from the Gazebo simulator (the pedestrian simulator) rather than from YOLO.

  1. Did you generate the state using the LiDAR data, the pedestrian information (position, speed), and the global navigation goal?

  2. How did you match the LiDAR coordinates with the pedestrian information?

zzuxzt commented 1 year ago

Hi, thanks for your interest in our work. In the current open-source code, we extract pedestrian information from the simulator (for easier deployment) instead of using the YOLO & MHT pipeline described in our paper. The environment state used in our paper consists of the lidar historical map preprocessed from the lidar sensor, the pedestrian kinematic maps preprocessed from the ZED camera sensor (or from the simulator in the current code), and the sub-goal point. All data are represented in the robot's local coordinate frame. You can easily transform the lidar data or camera data (pedestrian information) from their own coordinate frames into the robot's local coordinate frame using the TF tree; once that is done, the lidar data and pedestrian information are matched. More details can be found in our paper, "DRL-VO: Learning to Navigate Through Crowded Dynamic Scenes Using Velocity Obstacles".
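
As a minimal sketch of that last step, the snippet below uses the ROS TF tree (tf2) to bring a pedestrian position from its source frame into the robot's local frame. The frame names `map` and `base_link`, the node name, and the helper class are assumptions for illustration, not the exact code used in drl_vo_nav.

```python
#!/usr/bin/env python
# Sketch: transform a pedestrian (x, y) position into the robot's local
# frame via the TF tree, so it can be matched with the lidar data.
# Frame names and message layout are illustrative assumptions.

import rospy
import numpy as np
import tf2_ros
import tf2_geometry_msgs  # registers do_transform_pose for tf2
from geometry_msgs.msg import PoseStamped


class PedTransformer(object):
    def __init__(self):
        self.tf_buffer = tf2_ros.Buffer()
        self.tf_listener = tf2_ros.TransformListener(self.tf_buffer)

    def ped_to_robot_frame(self, ped_xy, source_frame="map", target_frame="base_link"):
        """Return the pedestrian position expressed in the robot's local frame."""
        pose = PoseStamped()
        pose.header.frame_id = source_frame
        pose.header.stamp = rospy.Time(0)  # ask for the latest available transform
        pose.pose.position.x, pose.pose.position.y = ped_xy
        pose.pose.orientation.w = 1.0

        transform = self.tf_buffer.lookup_transform(
            target_frame, source_frame, rospy.Time(0), rospy.Duration(0.1))
        local_pose = tf2_geometry_msgs.do_transform_pose(pose, transform)
        return np.array([local_pose.pose.position.x, local_pose.pose.position.y])


if __name__ == "__main__":
    rospy.init_node("ped_transform_sketch")
    pt = PedTransformer()
    rospy.sleep(1.0)  # give the TF buffer time to fill
    print(pt.ped_to_robot_frame((3.0, -1.5)))  # example pedestrian position in "map"
```

Using `rospy.Time(0)` requests the latest available transform, which avoids extrapolation errors when the pedestrian timestamps and TF updates are slightly out of sync; the same pattern works for transforming lidar or camera detections before building the local state maps.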

tkddnjs98 commented 1 year ago

Thanks for your reply.