Closed: pzhabc closed this issue 10 months ago
Hi,
You would need to describe in more detail what you mean by not being able to do navigation tasks. Did the robot not move or did the policy not bring the robot to the goal?
How did you change the robots and what sensors did you use to train and test?
Both are differential drive robots, so the policy should be transferable, and the simulation robot does not necessarily need to be the same as the real-world robot. But the sensors and their locations must align if you are using a velodyne sensor.
The turtlebot3 cannot reach the target point and collides 100% of the time, whereas the pioneer3dx does not. The turtlebot3 robot model was shared by others on GitHub: https://github.com/ROBOTIS-GIT/turtlebot3/blob/master/turtlebot3_description/urdf/turtlebot3_burger.gazebo.xacro. The training stage still uses the pioneer3dx model from your project, which has not been changed. Is it because of the different lidar models?
Where is the velodyne sensor placed on your turtlebot robot?
You can read about changing the robot model here: https://medium.com/@reinis_86651/using-turtlebot-in-deep-reinforcement-learning-4749946e1c15
Specifically, note the sensor parameters that change in the env file. The height at which the velodyne sensor is located is important since we filter out the floor.
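For illustration, here is a minimal sketch of how such a floor filter might relate to the mount height; the constant names and values are hypothetical, so check the actual env file in the repository for what is really used there:

```python
# Hypothetical constants, not the repository's actual names or values.
VELODYNE_MOUNT_HEIGHT = 0.2  # metres from the ground to the sensor origin on the robot
FLOOR_MARGIN = 0.03          # small tolerance so floor returns are not treated as obstacles

def is_obstacle_point(z):
    # Point coordinates arrive in the sensor frame, so the floor plane sits at
    # roughly z = -VELODYNE_MOUNT_HEIGHT. Keep only points clearly above it.
    return z > -VELODYNE_MOUNT_HEIGHT + FLOOR_MARGIN
```

If the sensor sits at a different height on your robot, a threshold tuned for the pioneer3dx will either let floor points through or cut away real obstacles.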
I followed your changes, but they didn't seem to work. Take a look at the videos below, which use the same already-trained network.
Have you looked in the already closed issues? This might help you: https://github.com/reiniscimurs/DRL-robot-navigation/issues/60
Thanks, I saw the comment and the issue has been resolved. It seems that only velodyne 3D radar was used in this project. Is it possible to use only 2D radar? And if I don't have 3D radar in the real world, can my 3D radar-trained network be deployed on 2D radar in real robots? Also, what does it mean to filter out the floor in the code?
It seems that only velodyne 3D radar was used in this project
Yes
Is it possible to use only 2D radar?
Radar is not lidar. But yes.
And if I don't have 3D radar in the real world, can my 3D radar-trained network be deployed on 2D radar in real robots?
Yes, as shown in the paper. See the sketch at the end of this reply for one way a 2D scan could be turned into the same state input.
Also, what does it mean to filter out the floor in the code?
Please read the provided tutorial. It is described there.
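As a rough sketch of what deployment on a 2D lidar could look like, assuming the policy expects a fixed number of minimum distances per angular sector: the scan just has to be binned into the same sectors. All names and parameter values below are illustrative, not taken from this repository.

```python
import numpy as np

def scan_to_state(ranges, angle_min, angle_increment,
                  environment_dim=20, fov=np.pi, max_range=10.0):
    """Bin a 2D laser scan into environment_dim sectors over a forward field of
    view and keep the closest valid return per sector (illustrative only)."""
    state = np.ones(environment_dim) * max_range
    bin_width = fov / environment_dim
    for i, r in enumerate(ranges):
        if not np.isfinite(r) or r <= 0.0:
            continue
        angle = angle_min + i * angle_increment        # beam angle in the sensor frame
        if -fov / 2 <= angle < fov / 2:
            b = min(int((angle + fov / 2) / bin_width), environment_dim - 1)
            state[b] = min(state[b], r)
    return state
```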
OK, thanks. I have read your tutorial, but I still haven't figured it out. On the pioneer3dx robot, is the velodyne mounted about 0.2 m above the ground? And I still don't understand why we filter out the ground, sorry.
Think of how we generate the values for the laser state. We bin the data and then take the minimum value in each bin to represent the value for that angular direction. This should represent the closest obstacle point in that direction. However, the velodyne laser also detects the ground, and most of the time the ground will be closer than any obstacle. So your laser state would not represent how far away the obstacles are, but rather where the floor is. We therefore need to filter out the floor to actually get any information about the obstacles.
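As a hedged sketch of that idea (the repository's own implementation may differ in names and details), the state construction could look roughly like this:

```python
import numpy as np

def points_to_state(points, environment_dim=20, fov=np.pi,
                    max_range=10.0, floor_z=-0.2):
    """Bin (x, y, z) velodyne points by horizontal angle and keep the closest
    point per bin, after dropping returns at or below the floor plane."""
    state = np.ones(environment_dim) * max_range
    bin_width = fov / environment_dim
    for x, y, z in points:
        if z <= floor_z:            # floor return: would otherwise dominate the minimum
            continue
        angle = np.arctan2(y, x)    # direction of the return in the sensor frame
        if -fov / 2 <= angle < fov / 2:
            b = min(int((angle + fov / 2) / bin_width), environment_dim - 1)
            state[b] = min(state[b], float(np.hypot(x, y)))  # horizontal distance
    return state
```

Without the `z <= floor_z` check, nearly every bin would end up holding the distance to the nearest patch of floor rather than to an obstacle.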
OK, thank you, I understand.
Hello, I used the pioneer3dx robot during the training phase but the turtlebot3 robot during the test phase. The turtlebot3 could not complete the navigation task, while the pioneer3dx could. Is it because the trained and tested robot models are inconsistent? Is it because of differences in their physical models? Must the trained and tested models be the same? Does the simulation robot have to be the same as the real-world robot?