reiniscimurs / DRL-robot-navigation

Deep Reinforcement Learning for mobile robot navigation in ROS Gazebo simulator. Using Twin Delayed Deep Deterministic Policy Gradient (TD3) neural network, a robot learns to navigate to a random goal point in a simulated environment while avoiding obstacles.
MIT License

Questions about environment codes #146

Closed namjiwon1023 closed 4 weeks ago

namjiwon1023 commented 4 weeks ago

Dear @reiniscimurs

In the Melodic branch's step function, one step publishes the action twice:

https://github.com/reiniscimurs/DRL-robot-navigation/blob/9a4d92ca41e370617baa4451f0a784f8ca5239ce/TD3/velodyne_env.py#L180

https://github.com/reiniscimurs/DRL-robot-navigation/blob/9a4d92ca41e370617baa4451f0a784f8ca5239ce/TD3/velodyne_env.py#L254

Is there any benefit to doing this? It seems to have been removed in the Noetic branch.

Thank you!

reiniscimurs commented 4 weeks ago

Hi,

This is just a mistake, and the second publishing command does not actually get used. Since it is the Melodic branch, I have not updated it, but it is not present in the Noetic branch.

namjiwon1023 commented 4 weeks ago

Dear @reiniscimurs

That means that in the Melodic branch, the second self.vel_pub.publish(vel_cmd) call (line 254) is not actually used, right? So even if this environment is used, the results obtained are still correct?

Thank you!

reiniscimurs commented 4 weeks ago

Yes, that is right. The second call has no effect.
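To illustrate why the duplicate is harmless, here is a minimal sketch of the publishing pattern discussed above. It uses a mock publisher rather than rospy (the real environment publishes to a ROS topic), and the step structure is a simplification of velodyne_env.py, not a copy of it; the names vel_cmd and publish mirror the thread, while MockVelocityPublisher is hypothetical.

```python
class MockVelocityPublisher:
    """Stand-in for a ROS velocity publisher (the real env uses
    rospy.Publisher). Records every message it receives so we can
    inspect what the simulator would have been sent."""

    def __init__(self):
        self.published = []

    def publish(self, msg):
        self.published.append(msg)


def step(pub, action):
    """Simplified sketch of the Melodic-branch step pattern:
    the same velocity command is published once before the physics
    is stepped (around line 180 in velodyne_env.py) and again
    afterwards (around line 254). By the second call the simulator
    has already consumed the command, so the duplicate message is
    identical and changes nothing."""
    vel_cmd = {"linear_x": action[0], "angular_z": action[1]}
    pub.publish(vel_cmd)   # first publish: this one drives the robot
    # ... unpause physics, let the simulator propagate, pause again ...
    pub.publish(vel_cmd)   # duplicate publish: no effect on the motion
    return vel_cmd


pub = MockVelocityPublisher()
cmd = step(pub, [0.5, 0.1])
# Both messages are identical, so dropping the second one (as the
# Noetic branch does) leaves the executed command unchanged.
assert pub.published[0] == pub.published[1] == cmd
```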

namjiwon1023 commented 4 weeks ago

Thank you for your prompt reply!

Thanks again for your contribution!