reiniscimurs / DRL-robot-navigation

Deep Reinforcement Learning for mobile robot navigation in ROS Gazebo simulator. Using Twin Delayed Deep Deterministic Policy Gradient (TD3) neural network, a robot learns to navigate to a random goal point in a simulated environment while avoiding obstacles.
MIT License

rviz and gazebo load an empty world environment #151

Open answsdu opened 3 months ago

answsdu commented 3 months ago

Hi Reinis, so sorry to bother you,

Describe the bug

When I run `python3 test_velodyne_td3.py`, it behaves as in the attached screenshots: rviz and gazebo load an empty world environment. I have also checked my GAZEBO_RESOURCE_PATH setting following comment #16 and I think it is correct, but the program still does not work properly (screenshot attached).
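For reference, a minimal sketch of checking that the path actually resolves could look like this (the clone location `~/DRL-robot-navigation` and the world file name `TD3.world` are assumptions based on a default setup, adjust them to your own clone):

```python
import os

# Minimal check of the Gazebo resource path. The clone location and the world
# file name are assumptions; adjust them to where you actually cloned the repo.
resource_path = os.environ.get("GAZEBO_RESOURCE_PATH", "<not set>")
print("GAZEBO_RESOURCE_PATH =", resource_path)

launch_dir = os.path.expanduser(
    "~/DRL-robot-navigation/catkin_ws/src/multi_robot_scenario/launch"
)
print("launch folder exists:", os.path.isdir(launch_dir))
print("TD3.world present:", os.path.isfile(os.path.join(launch_dir, "TD3.world")))
```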

The terminal prints the following error (screenshot attached):


Could you please help me solve this problem? Thank you very much!

answsdu commented 3 months ago

Hi Reinis, the program works well after I rebuilt the repo. But there is a new problem now: the model training is particularly slow, it only completed 40 epochs in 10 hours! Could you please tell me what I can do to improve the training speed? Thank you very much! (screenshot from 2024-07-02 22-22-38)

reiniscimurs commented 3 months ago

There isn't much you can do to increase training speed besides what is mentioned in the tutorial. However, here it seems your model is probably not training properly, and you might want to train again with a different seed. Look through the issues labeled with convergence and check whether you notice any similarities.
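As a rough sketch of what "train again with a different seed" amounts to (assuming your copy of train_velodyne_td3.py seeds torch and numpy from a single seed value near the top of the script, as the upstream version does):

```python
import numpy as np
import torch

# Assumption: torch and numpy are seeded from a single `seed` value, as in the
# upstream train_velodyne_td3.py; picking a different number gives a different
# random initialisation for the next training run.
seed = 42  # any value other than the one used in the failed run
torch.manual_seed(seed)
np.random.seed(seed)
```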

answsdu commented 3 months ago

Hi Reinis, I didn't make any changes to the "train_velodyne_td3.py" and "velodyne_env.py" files. After 89 epochs, the training seemed to work a little better, as in the following picture (screenshot from 2024-07-03 09-27-45). Thanks for your answer and patience! I'm just starting to learn about DRL, and I will look at the convergence issues.

reiniscimurs commented 3 months ago

The training is somewhat random, so even without changes it may fail to converge. It does look like the values are improving though, so the performance should also be improving.

innocence-cloud commented 2 months ago

Hello, I had the same problem: rviz loaded an empty world environment, and the terminal error message was the same. I would like to ask how to rebuild the repo and how to solve this problem. I would be very grateful.

reiniscimurs commented 2 months ago

@innocence-cloud To rebuild the repo, simply delete its folder and clone it again. Most likely you are not setting up the sources correctly, so make sure to follow the tutorial and pay attention to the paths to the repo.
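As a rough illustration only, a quick way to confirm that the terminal you launch from actually has the tutorial's environment set up (the variable names below follow the tutorial's export lines and are an assumption; adjust if yours differ) could look like:

```python
import os

# Print the environment variables the setup tutorial exports before launching.
# If any of them show "<not set>", the shell was most likely not sourced correctly.
for var in ("ROS_HOSTNAME", "ROS_MASTER_URI", "ROS_PORT_SIM", "GAZEBO_RESOURCE_PATH"):
    print(f"{var} = {os.environ.get(var, '<not set>')}")
```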

Seher-789 commented 2 months ago

@answsdu Hi, I want to know how you solved the empty world problem?