Closed: tkddnjs98 closed this issue 1 year ago
Hi, thanks for your interest in our work. In the current open-source code, we extract pedestrian information directly from the simulator (for easier deployment) instead of using the YOLO & MHT pipeline described in our paper. The environment state used in our paper consists of the lidar historical map preprocessed from the lidar sensor, the pedestrian kinematic maps preprocessed from the ZED camera sensor (or from the simulator in the current code), and the sub-goal point. All data are represented in the robot's local coordinate frame. You can easily transform lidar data or camera data (pedestrian information) from their own coordinate frames into the robot's local coordinate frame using the TF tree; once that is done, the lidar data and the pedestrian information are matched. More details can be found in our paper "DRL-VO: Learning to Navigate Through Crowded Dynamic Scenes Using Velocity Obstacles".
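For reference, here is a minimal sketch (not the authors' implementation) of how a pedestrian position reported in a camera or simulator frame can be transformed into the robot's local frame via the ROS TF tree, so it lines up with lidar data already expressed in that frame. The frame names (`camera_link`, `base_link`) and the standalone node setup are assumptions for illustration:

```python
#!/usr/bin/env python
# Minimal sketch: transform a pedestrian position from the camera/simulator
# frame into the robot's local frame using the TF tree, so it matches lidar
# data already expressed in that frame. Frame names are assumptions.
import rospy
import tf2_ros
import tf2_geometry_msgs  # registers PointStamped support for tf2's transform()
from geometry_msgs.msg import PointStamped


def to_robot_frame(tf_buffer, ped_point):
    """Transform a stamped pedestrian position into the base_link frame."""
    try:
        return tf_buffer.transform(ped_point, 'base_link', rospy.Duration(0.1))
    except (tf2_ros.LookupException,
            tf2_ros.ConnectivityException,
            tf2_ros.ExtrapolationException) as err:
        rospy.logwarn('TF transform failed: %s', err)
        return None


if __name__ == '__main__':
    rospy.init_node('ped_to_robot_frame')
    tf_buffer = tf2_ros.Buffer()
    listener = tf2_ros.TransformListener(tf_buffer)
    rospy.sleep(1.0)  # give the TF buffer a moment to fill

    # Example pedestrian position as it might arrive from the camera pipeline
    # or the simulator, expressed in the camera frame.
    ped = PointStamped()
    ped.header.frame_id = 'camera_link'
    ped.header.stamp = rospy.Time(0)  # use the latest available transform
    ped.point.x, ped.point.y, ped.point.z = 2.0, -0.5, 0.0

    ped_in_base = to_robot_frame(tf_buffer, ped)
    if ped_in_base is not None:
        rospy.loginfo('Pedestrian in base_link: (%.2f, %.2f)',
                      ped_in_base.point.x, ped_in_base.point.y)
```

Pedestrian velocities would be handled similarly, except that only the rotational part of the transform applies to a velocity vector.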
Thanks for your reply.
Hello, author! I am a student interested in building obstacle avoidance with reinforcement learning. I saw that you used pedestrian information directly from the Gazebo simulator (the pedestrian simulator) rather than from YOLO.
Did you generate the state using the LiDAR data, the pedestrian information (position, speed), and the global navigation goal?
How did you match the LiDAR coordinates with the pedestrian information?