reiniscimurs / DRL-robot-navigation

Deep Reinforcement Learning for mobile robot navigation in the ROS Gazebo simulator. Using a Twin Delayed Deep Deterministic Policy Gradient (TD3) neural network, a robot learns to navigate to a random goal point in a simulated environment while avoiding obstacles.

Some question #43

chih7715 closed this issue 1 year ago

chih7715 commented 1 year ago

Excuse me, after I have trained the model, do I need to save the file at the end? I'm a ROS beginner. If I want to apply it to a real device and environment, how can I do that? I have a Pioneer 3-DX and a Hokuyo UST-10LX.

reiniscimurs commented 1 year ago

Hi,

The model weights are saved automatically by setting the following flag: https://github.com/reiniscimurs/DRL-robot-navigation/blob/125da3d1f788d8c5f60aa972f36dab6f333348e1/TD3/train_velodyne_td3.py#L239

By default, it is already set to True. You should find the trained model weights in the pytorch_models folder. You can test them by running the test_velodyne_td3.py script.
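
For inference outside the test script, a minimal sketch of loading the saved actor might look like the following. The 800/600 layer sizes, the state layout (20 min-pooled laser bins plus goal distance, goal angle, and the two previous action values), and the TD3_velodyne_actor.pth file name are assumptions based on the training script; check them against your own checkout.

```python
import torch
import torch.nn as nn

# Actor mirroring the TD3 actor trained by train_velodyne_td3.py; the
# 800/600 layer sizes and the state/action dimensions are assumptions
# and must match whatever network you actually trained.
class Actor(nn.Module):
    def __init__(self, state_dim, action_dim):
        super(Actor, self).__init__()
        self.layer_1 = nn.Linear(state_dim, 800)
        self.layer_2 = nn.Linear(800, 600)
        self.layer_3 = nn.Linear(600, action_dim)

    def forward(self, s):
        s = torch.relu(self.layer_1(s))
        s = torch.relu(self.layer_2(s))
        return torch.tanh(self.layer_3(s))

# 24 = 20 laser bins + distance to goal, angle to goal, last linear
# action, last angular action. The checkpoint file name is an assumption;
# check what the training script wrote into ./pytorch_models.
actor = Actor(state_dim=24, action_dim=2)
actor.load_state_dict(torch.load("./pytorch_models/TD3_velodyne_actor.pth"))
actor.eval()
```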

You can train a model and connect it to a real robot through ROS commands; instead of calling the robot in simulation, call the robot you have created an interface with. Simply publish the velocity commands to whatever topic your real robot subscribes to. You can see an example of this in another repo: https://github.com/reiniscimurs/GDAE
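
For a rough idea of what that looks like on a Pioneer 3-DX, here is a minimal rospy sketch, not a tested implementation. It assumes the actor loaded in the snippet above, the /RosAria/cmd_vel topic exposed by the rosaria driver, and a /scan topic published by urg_node for the UST-10LX; the build_state function is a stand-in for however you assemble the 24-dimensional state, and the goal distance/angle must come from your own localization (zeros below are placeholders).

```python
#!/usr/bin/env python
import numpy as np
import rospy
import torch
from geometry_msgs.msg import Twist
from sensor_msgs.msg import LaserScan

latest_scan = None

def scan_cb(msg):
    global latest_scan
    latest_scan = msg

def build_state(scan, dist_to_goal, angle_to_goal, last_action):
    # Min-pool the scan into 20 sectors capped at 10 m (mirroring the
    # simulated state), then append goal polar coordinates and the
    # previous action. Adjust to match your trained state layout.
    ranges = np.nan_to_num(np.asarray(scan.ranges), posinf=10.0)
    bins = np.array_split(ranges, 20)
    laser = [min(float(b.min()), 10.0) for b in bins]
    return laser + [dist_to_goal, angle_to_goal, last_action[0], last_action[1]]

rospy.init_node("td3_real_robot")
rospy.Subscriber("/scan", LaserScan, scan_cb)                   # UST-10LX via urg_node
pub = rospy.Publisher("/RosAria/cmd_vel", Twist, queue_size=1)  # rosaria driver topic

last_action = [0.0, 0.0]
rate = rospy.Rate(10)
while not rospy.is_shutdown():
    if latest_scan is not None:
        # Goal distance/angle must come from your localization; zeros
        # are placeholders so the sketch runs.
        state = build_state(latest_scan, 0.0, 0.0, last_action)
        with torch.no_grad():
            action = actor(torch.tensor(state, dtype=torch.float32)).numpy()
        cmd = Twist()
        cmd.linear.x = float((action[0] + 1) / 2)  # linear action rescaled from [-1, 1]
        cmd.angular.z = float(action[1])
        pub.publish(cmd)
        last_action = [float(action[0]), float(action[1])]
    rate.sleep()
```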

chih7715 commented 1 year ago

Thank you very much for your reply, I will try it. Thank you for releasing this cool project!