reiniscimurs / DRL-robot-navigation

Deep Reinforcement Learning for mobile robot navigation in the ROS Gazebo simulator. Using a Twin Delayed Deep Deterministic Policy Gradient (TD3) network, a robot learns to navigate to a random goal point in a simulated environment while avoiding obstacles.

Car model modification issue #93

Closed: nlnlnl1 closed this issue 6 months ago

nlnlnl1 commented 9 months ago

Hello, and first of all thank you very much for sharing. I tried to use the camera image as the state input, but found that the front of the car covers most of the field of view in the image. [screenshot of the camera view] So I wanted to solve this by changing the camera position in the model. When I changed the values marked in red in the following file, the camera position did indeed move up, but my navigation broke: the previously trained model now collides constantly. [screenshot of the modified model file] I don't know whether any changes need to be made in other configuration files, or whether I need to change the code. Looking forward to your reply.

reiniscimurs commented 9 months ago

Just to make it clear: you loaded the pre-trained weights with the new configuration and ran it with test_velodyne_td3.py?

nlnlnl1 commented 9 months ago

Thank you very much for your reply, and I'm very sorry, the mistake was mine. When I reloaded the weights and tested again, everything now looks very good. [screenshot of the camera view] But I still want to confirm: with the camera view changed as shown above, do I need to make any other changes in the code or other configuration files? [screenshot of the model file] I only changed the xyz parameter of the first camera in this file, and I'm not sure whether it will cause other problems later.

reiniscimurs commented 9 months ago

The camera images are not used in training in any way; the camera is there only for visualization. Slight changes to the camera position should not affect training, as long as the camera sensor does not block the Velodyne lidar sensor.
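For reference, in a ROS/Gazebo robot description the camera pose is usually set in the `<origin>` tag of the fixed joint that mounts the camera link. A generic xacro-style sketch is below; the actual file, link, and joint names in this repo may differ:

```xml
<!-- Hypothetical camera mount; link and joint names are illustrative only. -->
<joint name="camera_joint" type="fixed">
  <!-- xyz offsets the camera from the parent link in meters;
       raising z lifts the camera so the chassis no longer fills the view. -->
  <origin xyz="0.20 0.0 0.50" rpy="0 0 0"/>
  <parent link="base_link"/>
  <child link="camera_link"/>
</joint>
```

Since only the visualization camera moves, the lidar-based state used by TD3 is unaffected as long as the new camera pose does not occlude the lidar.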

nlnlnl1 commented 9 months ago

Thank you very much for your answer. I am now trying to reduce the dimensionality of the image information before using it as input to the reinforcement learning model. Currently I am trying a spatial self-attention mechanism to extract features from the images; a rough sketch of what I mean is below. Do you have any other suggestions regarding this?
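For concreteness, a minimal SAGAN-style spatial self-attention block in PyTorch might look like the following. This is an illustrative sketch, not code from this repo; channel counts and shapes are placeholders:

```python
import torch
import torch.nn as nn

class SpatialSelfAttention(nn.Module):
    """SAGAN-style spatial self-attention over a conv feature map.
    Assumes channels >= 8 so the query/key bottleneck is non-empty."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (b, hw, c//8)
        k = self.key(x).flatten(2)                    # (b, c//8, hw)
        attn = torch.softmax(q @ k, dim=-1)           # (b, hw, hw)
        v = self.value(x).flatten(2)                  # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                   # residual connection

# Example: attend over a 64-channel feature map
feat = torch.randn(1, 64, 20, 15)
out = SpatialSelfAttention(64)(feat)  # same shape as feat
```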

reiniscimurs commented 9 months ago

You could use atrous (dilated) convolutions or similar methods to reduce the image dimensionality quite quickly. Previously I have also used depth-wise separable convolutions (https://www.mdpi.com/2079-9292/9/3/411); a rough sketch of such a block is given below.
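As an illustration (not code from this repo or the linked paper), a depth-wise separable convolution block with optional dilation could be sketched in PyTorch like this; the channel sizes and input resolution are placeholders:

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depth-wise separable convolution: a per-channel (depth-wise) 3x3 conv
    followed by a 1x1 point-wise conv. Setting dilation > 1 gives an atrous
    receptive field without adding parameters."""
    def __init__(self, in_ch, out_ch, stride=1, dilation=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=dilation, dilation=dilation,
                                   groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# Hypothetical encoder: RGB frame -> compact feature vector for the RL state
encoder = nn.Sequential(
    DepthwiseSeparableConv(3, 32, stride=2),
    DepthwiseSeparableConv(32, 64, stride=2, dilation=2),
    DepthwiseSeparableConv(64, 128, stride=2),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
)
features = encoder(torch.randn(1, 3, 160, 128))  # -> shape (1, 128)
```

The stride-2 blocks shrink the spatial resolution quickly while the point-wise convolutions keep the parameter count low, which is the usual motivation for this layer type in lightweight encoders.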

However, you should think about your application. If you train your model in simulation on image data, be aware that the deployed model may not work with out-of-distribution data. Image data is not as easily transferable between domains as laser data; even depth images have a pretty significant sim-to-real gap. So RGB images will be difficult to employ anywhere besides your training setup.

nlnlnl1 commented 9 months ago

Thank you very much for your reply.