reiniscimurs / DRL-robot-navigation

Deep Reinforcement Learning for mobile robot navigation in ROS Gazebo simulator. Using Twin Delayed Deep Deterministic Policy Gradient (TD3) neural network, a robot learns to navigate to a random goal point in a simulated environment while avoiding obstacles.
MIT License

test the training results #84

Closed peipei518 closed 10 months ago

peipei518 commented 10 months ago

Hello, I downloaded the Melodic branch of your project, but I couldn't find the code file for testing the training results in it. If I use the Noetic version of the testing code, I get an error. How can I test the training results?

peipei518 commented 10 months ago

When I use the Noetic version of the testing code, the terminal outputs the following error:

```
/home/autolabor/.local/lib/python3.6/site-packages/torch/cuda/__init__.py:80: UserWarning: CUDA initialization: The NVIDIA driver on your system is too old (found version 9010). Please update your GPU driver by downloading and installing a new version from the URL: http://www.nvidia.com/Download/index.aspx Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver. (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:112.)
  return torch._C._cuda_getDeviceCount() > 0
Traceback (most recent call last):
  File "test_velodyne_td3.py", line 55, in <module>
    env = GazeboEnv("multi_robot_scenario.launch", environment_dim)
TypeError: __init__() missing 2 required positional arguments: 'width' and 'nchannels'
```

reiniscimurs commented 10 months ago

There is a discrepancy in how GazeboEnv is initialized between the noetic and melodic branches. This is purely a simple Python issue: you are missing input values for the class instantiation. AFAIK you could put anything there, as the values are not used in the code anyway, which is why they were removed in the noetic version.
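To illustrate the mismatch, here is a minimal, self-contained sketch. The `GazeboEnvMelodic` stand-in class, its attribute names, and the placeholder values `1, 1` are assumptions for demonstration; the real melodic `GazeboEnv` does much more, but its `__init__` signature requires `width` and `nchannels` in the same way, and any placeholder values should satisfy it since they go unused.

```python
# Hypothetical stand-in for the melodic GazeboEnv: __init__ requires two
# extra positional arguments ('width' and 'nchannels') that the noetic
# version dropped because they were never used.
class GazeboEnvMelodic:
    def __init__(self, launchfile, environment_dim, width, nchannels):
        self.launchfile = launchfile
        self.environment_dim = environment_dim
        # width and nchannels are accepted but ignored, mirroring melodic

environment_dim = 20

# Calling with the noetic-style argument list reproduces the reported error:
try:
    env = GazeboEnvMelodic("multi_robot_scenario.launch", environment_dim)
except TypeError as e:
    print(e)  # missing 2 required positional arguments: 'width' and 'nchannels'

# Supplying any placeholder values for the unused parameters fixes it:
env = GazeboEnvMelodic("multi_robot_scenario.launch", environment_dim, 1, 1)
```

So on the melodic branch, adding two throwaway values to the `GazeboEnv(...)` call in the test script should be enough to get past this `TypeError`.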

peipei518 commented 10 months ago

Thank you very much. With your suggestion, I have solved the problem.