zyw0319 opened this issue 4 months ago
I retested the following and found that when eval_freq=5000, it takes 30 minutes for the terminal to print the message shown in Figure 3. Is this normal? [Figure 3]
Hi, yes, that makes sense. The message is printed only during the evaluation at the end of each epoch. An epoch runs for about 5000 steps, and each step takes 0.1 seconds, so by default each epoch should take about 5000 * 0.1 = 500 seconds, or a bit more than 8 minutes. Add some training time and each epoch will run for about 10 minutes. 30 minutes is quite a long time, so you should check whether your ROS simulation can run in real time. Other than that, it seems like everything is performing normally.
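(One way to check the real-time factor mentioned here is to compare simulated time against wall-clock time. A minimal sketch, assuming a ROS 1 setup with use_sim_time enabled so rospy reads time from /clock; the node name is just for illustration.)

```python
# Rough real-time-factor check for the running Gazebo simulation.
# Assumes roscore + Gazebo are up and use_sim_time is true.
import time
import rospy

rospy.init_node("rtf_check", anonymous=True)
wall_start = time.time()
sim_start = rospy.get_time()      # simulated time, taken from /clock
time.sleep(10)                    # wait 10 wall-clock seconds
rtf = (rospy.get_time() - sim_start) / (time.time() - wall_start)
print("Approximate real-time factor: %.2f" % rtf)  # ~1.0 means real time
```

A factor well below 1.0 would explain epochs taking 30 minutes instead of the expected ~10.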
Thank you very much for your reply. I have two questions at present. First: before I run python3 test_velodyne_td3.py, I launch the ROS and Gazebo environment. Will launching the Gazebo environment affect the training speed? Second: I saw that the training in your article took 8 hours. How can I get the same training time as you? I hope you can give me some suggestions. Thank you again.
Thank you very much for your reply, which helped me solve the problems I encountered recently. I still have some questions about training; please help me answer them. First: I have changed 1000 to 2000 in the TD3 file according to your tutorial, and after training for 15 hours I have completed 66 epochs in total. Is this speed reasonable? (See Figure 1.) Second: did you use a GPU for training? My GPU occupancy is shown in Figure 2. Is it reasonable? If you do use the GPU, could you give me some pointers? Thank you again. [Figure 1] [Figure 2]
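(Regarding the GPU question, a quick generic PyTorch check, not code from this repo, to confirm training actually runs on the GPU:)

```python
# Verify that PyTorch sees the GPU and remember to move networks to it.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)                             # should print "cuda"
if device.type == "cuda":
    print(torch.cuda.get_device_name(0))  # e.g. the GPU model name
# Networks and tensors must be placed on the device explicitly, e.g.:
# actor = Actor(state_dim, action_dim).to(device)
```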
Thank you very much for your patient guidance. I have a few questions that I would like to ask you again.
First, I read your paper and code and found that the termination condition for training in the paper is 800 epochs, while in the code it is max_timesteps = 5e6. Which condition is used?
Second, Figures 1, 2, and 3 show my current TensorBoard visualizations. Why does the loss function keep rising? Is this normal?
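(For context, the two stopping criteria contrasted in the first question would look roughly like this in a training loop; a sketch under the assumption that one epoch corresponds to eval_freq = 5000 timesteps, not the repository's exact code.)

```python
max_timesteps = 5e6   # step-based cap, as in the code
max_epochs = 800      # epoch-based cap, as in the paper
eval_freq = 5000      # steps per epoch/evaluation

timestep, epoch = 0, 0
while timestep < max_timesteps and epoch < max_epochs:
    # ... one environment step + one training update ...
    timestep += 1
    if timestep % eval_freq == 0:
        epoch += 1    # evaluate and log here
print("stopped at timestep %d, epoch %d" % (timestep, epoch))
```

Note that with these values the epoch cap triggers first (800 * 5000 = 4e6 steps), so the two conditions are not equivalent.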
Hello, I have successfully tested your code for 100 episodes, and the success rate of reaching the target point is about 87%. I have two questions for you.
I have modified the source code according to your suggestion, as shown in Figures 1 and 2 below. I trained the agent for 328 epochs and then tested it, once with random start and goal points and once with fixed start and goal points, and found that the random success rate was 84% while the fixed success rate was 0% (as shown in Figure 3). Could you give me some advice?
Likely your distance to the goal is outside the trained range. Meaning, when you train the model, the maximum distance to the goal it will see is around 10 meters. Here the distance is something like 12.5 meters, which is a value the model has never seen and was never trained on.
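(A quick way to sanity-check this: compute the straight-line start-to-goal distance of the fixed points and compare it against the trained range. A sketch; the coordinates are hypothetical and the ~10 m limit is taken from the comment above.)

```python
import math

def goal_distance(start, goal):
    return math.hypot(goal[0] - start[0], goal[1] - start[1])

start, goal = (0.0, 0.0), (9.0, 8.7)   # hypothetical fixed points
d = goal_distance(start, goal)
if d > 10.0:                            # max distance seen in training
    print("Distance %.1f m exceeds the trained range" % d)
```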
Dear Mr. REINISCIMURS, first of all, thank you very much for your code. I am going to add an experience pool (replay buffer) model on top of your code, but I currently encounter a dimension mismatch problem. At present, my analysis of the reason is that the starting point and the target point are different in every training episode, which causes the torch.Size dimensions to be inconsistent. Thank you again.
> At present, my analysis of the reason is that the starting point and the target point are different in every training episode, which causes the torch.Size dimensions to be inconsistent.
Hi. The start and goal positions are random for each episode by design, and that is the intended behavior. However, it should not change the input size in any way. Your issue stems from having a different state representation than the default one in this repo. That is, your input vector actually has 36 values instead of the expected 24, which means there is some change in the code. Please provide your code changes so we can see what the issue is.
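(For reference, the default 24-value state is composed roughly as follows; a sketch based on the paper's description of 20 min-pooled laser bins plus four robot/goal values, with placeholder numbers.)

```python
import numpy as np

laser_state = np.zeros(20)               # 20 min-pooled laser bins
dist_to_goal, angle_to_goal = 5.0, 0.3   # polar coordinates to the goal
last_lin_vel, last_ang_vel = 0.5, 0.0    # previous action
robot_state = np.array([dist_to_goal, angle_to_goal,
                        last_lin_vel, last_ang_vel])
state = np.concatenate([laser_state, robot_state])
assert state.shape == (24,)  # 36 values means extra inputs were added somewhere
```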
Also note that for new questions, it is better to open a new issue and fill in the issue template. Without the information that is asked for there, it is really difficult to help and answer questions.
Dear Reinis Cimurs, I recently read your paper titled "Goal-Driven Autonomous Exploration Through Deep Reinforcement Learning". I think your paper is fantastic, and having watched your videos on YouTube, I can't wait to implement it. I have a problem: after I run python3 test_velodyne_td3.py, I find that the agent in Gazebo runs normally, but the terminal does not print the message "Average Reward over %i Evaluation Episodes, Epoch %i: %f, %f", as shown in Figure 1. When I change the parameter eval_freq = 5e3 to 500, the message is printed normally, as shown in Figure 2. Can you give me some suggestions? Thank you again. [Figure 1] [Figure 2]
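(Why lowering eval_freq makes the message appear sooner: the evaluation, and the print that comes with it, is gated on a timestep counter. A minimal sketch of that gating, not the repository's exact code.)

```python
eval_freq = 500          # original value: 5e3
timesteps_since_eval = 0

for timestep in range(2000):             # stand-in for the training loop
    timesteps_since_eval += 1
    if timesteps_since_eval >= eval_freq:
        timesteps_since_eval = 0
        # this is where the "Average Reward over %i Evaluation Episodes,
        # Epoch %i: %f, %f" message would be printed
        print("evaluation at timestep", timestep)
```

With eval_freq = 5e3 the first print only comes after 5000 environment steps, which at roughly 0.1 s per step is the 8-plus minutes discussed at the top of this thread.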