reiniscimurs / DRL-robot-navigation

Deep Reinforcement Learning for mobile robot navigation in ROS Gazebo simulator. Using Twin Delayed Deep Deterministic Policy Gradient (TD3) neural network, a robot learns to navigate to a random goal point in a simulated environment while avoiding obstacles.
MIT License

Utilize RGB or depth sensor data #31

Closed mincheulkim closed 2 years ago

mincheulkim commented 2 years ago

Hi, thanks for your great work.

I want to re-implement your previous work that used depth camera information (https://www.mdpi.com/2079-9292/9/3/411) on top of this repo.

I verified that the vision camera mounted on the robot publishes valid RGB output in Gazebo and RViz. But how do I turn the Gazebo topic into a ROS topic? Can I declare a new subscriber for the vision data, like the other subscribers declared in velodyne_env.py?

(this topic is related to #14 by Barry2333)

thanks,

reiniscimurs commented 2 years ago

Hi,

I am not sure what you mean by "turn the Gazebo topic into a ROS topic". Can you explain that?

Yes, you should be able to add the depth camera to the robot. I think one is already attached, but I am not sure if it is publishing anything. Then you would need to create a subscriber in the velodyne_env.py file, just like any other subscriber, and use the data as input to the network.
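A minimal sketch of what that could look like. The topic name `/camera/depth/image_raw` and the image size are assumptions (check with `rostopic list` for your camera plugin); the binning mirrors how velodyne_env.py compresses the laser scan into a fixed-size state vector, and the processing function is kept pure numpy so it can be tested outside ROS:

```python
import numpy as np

# Hypothetical wiring inside the environment class in velodyne_env.py,
# next to the existing subscribers (requires rospy and sensor_msgs):
#
#   self.depth_sub = rospy.Subscriber(
#       "/camera/depth/image_raw", Image, self.depth_callback, queue_size=1)
#
# The callback would convert the Image message to a numpy array
# (e.g. via cv_bridge) and pass it to the function below.

def depth_to_state(depth_img, n_bins=20, max_range=10.0):
    """Compress an HxW depth image into n_bins values: take the closest
    point in each vertical strip, clip to sensor range, normalize to [0, 1].
    Mirrors the gap-binning applied to the laser data in velodyne_env.py."""
    # Depth cameras emit NaN/inf for out-of-range pixels; treat as max range
    depth = np.nan_to_num(depth_img, nan=max_range, posinf=max_range)
    strips = np.array_split(depth, n_bins, axis=1)      # vertical strips
    state = np.array([np.min(s) for s in strips])       # nearest point per strip
    return np.clip(state, 0.0, max_range) / max_range   # normalize to [0, 1]
```

The resulting fixed-length vector can then be concatenated with the goal polar coordinates and fed to the TD3 network, the same way the laser state is used now.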

That paper was based on TensorFlow, though, and I no longer have that code, so you would have to recreate it from the paper.