Closed. nlnlnl1 closed this issue 6 months ago.
Just to be clear: you loaded the pre-trained weights with the new configuration and ran them with test_velodyne_td3.py?
Thank you very much for your reply. I'm sorry, the mistake was mine: after reloading the weights and testing again, everything looks good now. But I still want to confirm one thing. When the view changes to the one shown in the picture above, do I need to make any other changes in the code or in other configuration files? I only changed the xyz parameter of the first camera in this file, and I'm not sure whether that will cause problems later on.
The camera images are not used in training in any way; the camera is only there for visualization purposes. Slight changes to the camera position should not affect training, as long as the camera sensor does not block the Velodyne lidar sensor.
Thank you very much for your answer. I am now trying to reduce the dimensionality of the image information used as input to the reinforcement learning model, currently by applying a spatial self-attention mechanism to extract features from the images. Do you have any other suggestions on this?
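For reference, a minimal NumPy sketch of what a spatial self-attention step over a CNN feature map looks like. This is an illustrative simplification, not the repository's code: it omits the learned query/key/value projections a real module (e.g. a non-local block) would have, and the function name and shapes are my own assumptions.

```python
import numpy as np

def spatial_self_attention(feat):
    """Toy spatial self-attention over a (C, H, W) feature map.

    Every spatial position attends to every other position; the output
    at each position is an attention-weighted mix of all positions.
    Learned projections are omitted for brevity (query = key = value = input).
    """
    C, H, W = feat.shape
    x = feat.reshape(C, H * W)                    # flatten space: (C, N)
    scores = x.T @ x / np.sqrt(C)                 # (N, N) pairwise similarity
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)       # softmax over positions
    out = x @ attn.T                              # weighted sum of "values"
    return out.reshape(C, H, W)

rng = np.random.default_rng(0)
fmap = rng.standard_normal((8, 4, 4))             # e.g. a small CNN feature map
out = spatial_self_attention(fmap)
print(out.shape)  # (8, 4, 4)
```

Note that the attention matrix is N x N in the number of spatial positions, which is why such modules are usually applied only after the feature map has already been downsampled.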
You could use atrous (dilated) convolutions or similar methods to reduce the image dimensionality quite quickly. I have previously used depth-wise separable convolutions as well (https://www.mdpi.com/2079-9292/9/3/411).
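To make the depth-wise separable idea concrete, here is a small NumPy sketch (my own illustration, not code from the linked paper): one k x k filter per input channel (depthwise step), followed by a 1x1 pointwise convolution that mixes channels, with stride used to downsample.

```python
import numpy as np

def depthwise_separable_conv(x, dw_kernels, pw_weights, stride=2):
    """Depthwise conv followed by a 1x1 pointwise conv (no padding).

    x: (C_in, H, W) input, dw_kernels: (C_in, k, k) one filter per channel,
    pw_weights: (C_out, C_in) channel-mixing weights. stride > 1 downsamples.
    """
    C_in, H, W = x.shape
    k = dw_kernels.shape[1]
    H_out = (H - k) // stride + 1
    W_out = (W - k) // stride + 1
    dw = np.zeros((C_in, H_out, W_out))
    for c in range(C_in):                 # depthwise: filter each channel alone
        for i in range(H_out):
            for j in range(W_out):
                patch = x[c, i*stride:i*stride+k, j*stride:j*stride+k]
                dw[c, i, j] = np.sum(patch * dw_kernels[c])
    # pointwise 1x1 conv: linear mix across channels at each spatial position
    return np.tensordot(pw_weights, dw, axes=([1], [0]))  # (C_out, H_out, W_out)

rng = np.random.default_rng(1)
img = rng.standard_normal((3, 16, 16))    # e.g. a small RGB crop
dw_k = rng.standard_normal((3, 3, 3))
pw_w = rng.standard_normal((8, 3))
y = depthwise_separable_conv(img, dw_k, pw_w, stride=2)
print(y.shape)  # (8, 7, 7)
```

The factorization uses C_in*k*k + C_out*C_in weights instead of C_out*C_in*k*k for a standard convolution, which is where the speed and parameter savings come from.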
However, you should think about your application. If you train your model in simulation on image data, be aware that the deployed model will face out-of-distribution data. Image data does not transfer between domains as easily as laser data; even depth images have a pretty significant sim-to-real gap. So RGB images will be difficult to employ anywhere besides your training setup.
Thank you very much for your reply.
Hello, first of all, thank you very much for sharing this work. I tried to use the camera image as the state input, but found that the front of the car covers most of the field of view. I wanted to solve this by changing the camera position in the model, so I changed the values marked in red in the file below. The camera position did move up, but my navigation now has problems: the previously trained model is constantly colliding. Do I need to make changes in other configuration files, or somewhere in the code? Looking forward to your reply.
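For context, a camera pose change of the kind described usually means editing the `origin xyz` of the camera joint in the robot's URDF/xacro description. The fragment below is a hypothetical sketch only; the actual file, joint, and link names in this repository may differ.

```xml
<!-- Hypothetical URDF/xacro sketch: moving the camera up and forward.
     Joint and link names here are placeholders, not the repo's actual names. -->
<joint name="camera_joint" type="fixed">
  <origin xyz="0.15 0.0 0.35" rpy="0 0 0"/>  <!-- edited camera position -->
  <parent link="base_link"/>
  <child link="camera_link"/>
</joint>
```

Since only the visual camera pose changes, the lidar-based state input should be unaffected as long as the camera body does not physically intersect or occlude the lidar.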