reiniscimurs / DRL-robot-navigation

Deep Reinforcement Learning for mobile robot navigation in ROS Gazebo simulator. Using Twin Delayed Deep Deterministic Policy Gradient (TD3) neural network, a robot learns to navigate to a random goal point in a simulated environment while avoiding obstacles.
MIT License

About 3D-LiDAR Sensor #150

Open namjiwon1023 opened 3 days ago

namjiwon1023 commented 3 days ago

Thank you for your contribution!

I have two questions regarding 3D LiDAR:

  1. In the environment code, when the point cloud is processed, only points with a Z-axis value greater than -0.2 are kept. If I raise the 3D LiDAR mount from 0.23 to 0.35, do I need to change the -0.2 to -0.3?

  2. Regardless of whether the sensor FOV is 180° or 360°, the self.gaps angle ranges should still produce the 20 minimum values for the 180° directly in front of the robot. Is that correct? I changed the 3D LiDAR's range to 360°, but I only need the minimum values for the front 180°. While reviewing the callback code, I saw that each point is assigned to a gap using the angle between the robot's forward axis and the point, so I want to confirm this.

Thank you.

reiniscimurs commented 3 days ago

Hi

  1. You are correct. Generally we want to filter out the floor plane, so if we place the sensor higher we should decrease the z-axis threshold accordingly. This is briefly described in the turtlebot branch: https://github.com/reiniscimurs/DRL-robot-navigation/blob/Noetic-Turtlebot/TD3/velodyne_env.py#L24-L26
  2. Yes, we check the angle to figure out which gap each reading belongs to. It should still work for a 360° FOV, but it would be good to double-check. Mind you, it would be rather suboptimal, as half of the points in your sensor reading would be passed through all of the gap checks even though they are never used. So it is computationally wasteful and could be optimized.
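
For reference, here is a minimal sketch of the idea, not the exact code from velodyne_env.py: the floor-plane threshold (here `FLOOR_Z`), the gap layout, and the early rejection of rear points are assumptions chosen to illustrate the two points above (a raised sensor means a lower z threshold, and points outside the front 180° can be skipped before the per-gap checks).

```python
import math
import numpy as np
import sensor_msgs.point_cloud2 as pc2

FLOOR_Z = -0.3   # assumed: lowered from -0.2 because the sensor sits ~0.1 m higher
ENV_DIM = 20     # number of angular gaps covering the front 180 degrees

# Assumed gap layout: 20 equal bins from -pi/2 to +pi/2 around the forward axis.
gaps = [[-np.pi / 2 + i * np.pi / ENV_DIM,
         -np.pi / 2 + (i + 1) * np.pi / ENV_DIM] for i in range(ENV_DIM)]

def velodyne_callback(msg):
    """Hypothetical callback: keep the minimum range per angular gap."""
    ranges = np.ones(ENV_DIM) * 10.0
    for x, y, z in pc2.read_points(msg, field_names=("x", "y", "z"), skip_nans=True):
        if z <= FLOOR_Z:
            continue  # drop floor-plane returns
        beta = math.atan2(y, x)  # angle of the point w.r.t. the robot's forward axis
        if not -np.pi / 2 <= beta <= np.pi / 2:
            continue  # skip the rear 180 degrees early, avoiding the per-gap checks
        dist = math.sqrt(x * x + y * y + z * z)
        for j, (lo, hi) in enumerate(gaps):
            if lo <= beta < hi:
                ranges[j] = min(ranges[j], dist)
                break
    return ranges
```

With an early rejection like the `beta` check above, a 360° sensor only pays the gap-matching cost for points that can actually end up in the state vector.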