reiniscimurs / DRL-robot-navigation

Deep Reinforcement Learning for mobile robot navigation in ROS Gazebo simulator. Using Twin Delayed Deep Deterministic Policy Gradient (TD3) neural network, a robot learns to navigate to a random goal point in a simulated environment while avoiding obstacles.
MIT License

The problem of collision and resetting #81

Closed hzxBuaa closed 3 months ago

hzxBuaa commented 8 months ago

Hello, reiniscimurs. Could you help me with the following questions? I would really appreciate an answer.

Does Gazebo publish any information when a collision occurs? I changed the shape of the vehicle to a rectangular box, and this causes a problem: if a single minimum-distance collision threshold is set, how should that distance be chosen? Since the vehicle is rectangular rather than round, the distance from an obstacle to the side differs from the distance to the front. My requirement is to reset the episode upon collision, but if the collision distance is chosen based on the front, extra space is sacrificed on the sides. So I'd like to ask if you have a better way. For example, if Gazebo transmits some collision information directly, we could reset on that without having to consider the front/side difference.

reiniscimurs commented 7 months ago

Hi,

Are you planning for this to work only in simulation or also in real life?

In simulation I can think of 2 ways to do this.

  1. You can try to make the ROS bumper sensor work. Then you could place this bumper sensor on your robot and it would trigger when the robot collides with something. However, it is notoriously difficult to get working in ROS and, in my opinion, it is essentially an unusable sensor, as the ROS documentation is missing a lot of information about how to actually implement it. Some information here: https://answers.ros.org/question/246448/getting-contact-sensorbumper-gazebo-plugin-to-work/
  2. You can just use the laser sensor values and write your own collision function. Currently, we only check the minimum value over the whole range of laser data, but you could divide the laser scan into sections, where each section has its own collision-triggering distance. For instance, laser values on the sides could use a closer collision-triggering distance than values representing the front of the robot. This should be fairly easy to implement.
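Option 2 could be sketched roughly like this. This is only an illustrative sketch, not code from the repository: the function name, the number of sections, and the threshold values are all hypothetical, and a 180-degree scan split side/front/side is assumed. You would tune the thresholds to your robot's actual footprint.

```python
def check_collision(laser_data, thresholds=(0.25, 0.45, 0.25)):
    """Return True if any laser reading falls below its section's threshold.

    laser_data: sequence of range readings, ordered from one side of the
                scan to the other (e.g. the ranges field of a LaserScan).
    thresholds: hypothetical per-section minimum distances; here a tighter
                value for the two side sections and a looser one for the
                front, matching a rectangular robot that is longer than
                it is wide.
    """
    num_sections = len(thresholds)
    section_len = len(laser_data) // num_sections
    for i, threshold in enumerate(thresholds):
        start = i * section_len
        # The last section absorbs any leftover readings from integer division.
        end = (i + 1) * section_len if i < num_sections - 1 else len(laser_data)
        if min(laser_data[start:end]) < threshold:
            return True
    return False
```

With this in place, the environment's reset condition would call `check_collision` on each new scan instead of comparing a single global minimum against one threshold, so an obstacle 0.3 m off the side does not trigger a reset while the same reading straight ahead does.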