reiniscimurs / DRL-robot-navigation

Deep Reinforcement Learning for mobile robot navigation in the ROS Gazebo simulator. Using a Twin Delayed Deep Deterministic Policy Gradient (TD3) neural network, a robot learns to navigate to a random goal point in a simulated environment while avoiding obstacles.

Some problems #100

Closed: pzhabc closed this issue 5 months ago

pzhabc commented 5 months ago

Hello, I used the pioneer3dx robot during the training phase but the turtlebot3 robot during the test phase. The turtlebot3 couldn't complete the navigation tasks, while the pioneer3dx could. Is it because the trained and tested robot models are inconsistent, or because of differences in their physical models? Must the trained and tested models be the same? And does the simulated robot have to match the real-world robot?

reiniscimurs commented 5 months ago

Hi,

You would need to describe in more detail what you mean by not being able to do navigation tasks. Did the robot not move, or did the policy not bring the robot to the goal?

How did you change the robots and what sensors did you use to train and test?

Both are differential drive robots, so the policy should be transferable, and the simulated robot does not necessarily need to be the same as the real-world robot. But the sensors and their locations must align if you are using a velodyne sensor.

pzhabc commented 5 months ago

It's that the turtlebot3 can't get to the target point: 100% collision, whereas the pioneer3dx doesn't collide. The turtlebot3 robot model is the one shared on GitHub: https://github.com/ROBOTIS-GIT/turtlebot3/blob/master/turtlebot3_description/urdf/turtlebot3_burger.gazebo.xacro. The training stage still uses the pioneer3dx model from your project, unchanged. Is it because of the different lidar models?

reiniscimurs commented 5 months ago

Where is the velodyne sensor placed on your turtlebot robot?

You can read about changing the robot model here: https://medium.com/@reinis_86651/using-turtlebot-in-deep-reinforcement-learning-4749946e1c15

Specifically, note the sensor parameters that change in the env file. The height at which the velodyne sensor is located is important since we filter out the floor.
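
As a concrete illustration of that height parameter, here is a minimal sketch of what filtering out the floor amounts to (a hypothetical helper, not the repo's exact code):

```python
def filter_floor(points, floor_z=-0.2):
    """Drop lidar returns at or below the floor plane.

    Illustrative sketch, not the repo's exact code. floor_z should be the
    negative of the velodyne mounting height: for a sensor mounted ~0.2 m
    above the ground, floor hits arrive at z ≈ -0.2 m in the sensor frame.
    """
    return [(x, y, z) for (x, y, z) in points if z > floor_z]
```

If the mounting height on the new robot differs and this threshold is not updated accordingly, floor returns can dominate the laser state, or genuine obstacle returns can be discarded.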

pzhabc commented 5 months ago

I followed your changes, but they didn't seem to work. Take a look at the videos below, which use the same network that has already been trained.

https://github.com/reiniscimurs/DRL-robot-navigation/assets/147294035/81b89d4b-43fc-46e4-b6ab-a802289803c2

https://github.com/reiniscimurs/DRL-robot-navigation/assets/147294035/6261b909-e3c5-438e-857b-f6262d4b9650

reiniscimurs commented 5 months ago

Have you looked in the already closed issues? This might help you: https://github.com/reiniscimurs/DRL-robot-navigation/issues/60

pzhabc commented 5 months ago

Thanks, I saw the comment and the issue has been resolved. It seems that only velodyne 3D radar was used in this project. Is it possible to use only 2D radar? And if I don't have 3D radar in the real world, can my 3D radar-trained network be deployed on 2D radar in real robots? Also, what does it mean to filter out the floor in the code?

reiniscimurs commented 5 months ago

> It seems that only velodyne 3D radar was used in this project

Yes.

> Is it possible to use only 2D radar?

Radar is not lidar. But, yes.

> And if I don't have 3D radar in the real world, can my 3D radar-trained network be deployed on 2D radar in real robots?

Yes, as shown in the paper.

> Also, what does it mean to filter out the floor in the code?

Please read the provided tutorial. It is described there.

pzhabc commented 5 months ago

OK, thanks. I have read your tutorial, but I still haven't fully figured it out. On the pioneer3dx robot, is the velodyne mounted about 0.2 m above the ground? I'm sorry, but I still don't understand why we filter out the ground.

reiniscimurs commented 5 months ago

Think of how we generate the values for the laser state. We bin the data and then take the minimum value in each bin to represent that angular direction. This should be the closest point to an obstacle in that direction. However, the velodyne laser also detects the ground, and most of the time the ground will be closer than any obstacle. Your laser state would then not represent how far away the obstacles are, but rather where the floor is. So we need to filter out the floor to get any information about the obstacles.
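
For illustration, here is a minimal standalone sketch of that binning step (hypothetical names and defaults, not the repo's exact implementation): drop floor returns by a z-threshold, group the remaining points into fixed angular bins, and keep the minimum distance per bin as the laser state.

```python
import numpy as np

def laser_state(points, n_bins=20, fov=np.pi, floor_z=-0.2, max_range=10.0):
    """Bin lidar points into angular sectors and keep the closest distance
    per sector. A minimal sketch of the idea, not the repo's exact code.

    points:  (N, 3) array of (x, y, z) in the sensor frame.
    floor_z: returns at or below this height are treated as floor and dropped
             (e.g. -0.2 m for a sensor mounted ~0.2 m above the ground).
    """
    state = np.full(n_bins, max_range)            # default: nothing detected
    pts = points[points[:, 2] > floor_z]          # filter out the floor
    angles = np.arctan2(pts[:, 1], pts[:, 0])     # bearing of each point
    dists = np.linalg.norm(pts[:, :2], axis=1)    # planar distance
    keep = np.abs(angles) <= fov / 2              # forward field of view only
    angles, dists = angles[keep], dists[keep]
    bins = ((angles + fov / 2) / fov * n_bins).astype(int)
    bins = np.clip(bins, 0, n_bins - 1)
    for b, d in zip(bins, dists):
        state[b] = min(state[b], d)               # closest return per sector
    return state
```

For example, a point straight ahead at (1, 0, 0.1) lands in the middle bin with value 1.0, while a floor return at (0.5, 0, -0.2) is discarded; without the z-threshold, the floor return would win the minimum and mask the real obstacle.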

pzhabc commented 5 months ago

OK, thank you, I understand.