AliBaheri closed this issue 5 years ago
Yes, I think you are right. This was basically the percentage of successful episodes, out of 25 episodes each. Of course, the tests were performed after training.
An experiment is successful if the agent reaches the goal within a certain time limit. The agent starts at a certain point A and is directed to reach a certain point B. For this benchmark, a collision is not considered a reason for failure; the agent can collide and still succeed.
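To make the criterion concrete, here is a minimal illustrative sketch (not the actual CARLA benchmark code; the function and field names are hypothetical): an episode succeeds if and only if the agent reaches the goal before the time limit, and collisions are deliberately ignored.

```python
def episode_success(reached_goal, elapsed_time, time_limit, collided=False):
    """Return True if the episode counts as a success.

    `collided` is accepted but intentionally unused: in this benchmark,
    a collision is not a reason for failure.
    """
    return reached_goal and elapsed_time <= time_limit


def success_rate(episodes, time_limit):
    """Percentage of successful episodes, as reported per task."""
    successes = sum(
        episode_success(e["reached_goal"], e["elapsed_time"], time_limit,
                        e.get("collided", False))
        for e in episodes
    )
    return 100.0 * successes / len(episodes)
```

With 25 test episodes per task, `success_rate` would give the per-task percentage described above.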
Thanks for your clarification.
I am still a bit confused about the results presented in Table 1 of the legacy CARLA paper.
My question is: which code produced those results? Was it driving_benchmark_example --corl2017, or something else?
Thanks.
The code is in other CARLA organization repositories; we have the reinforcement learning and the imitation learning repos. All discussions regarding benchmarks have been moved to a new repo: https://github.com/carla-simulator/driving-benchmarks
I have two questions about the results presented in CARLA legacy paper, Table 1.
1) I assume that for the different tasks (straight, turn, navigation, and dynamic navigation) you ran 100 tests after training and reported the number of successful results. Could you please confirm my understanding? Obviously this is not the case for MP; my question refers only to the IL and RL cases.
2) How do you determine that a given experiment is successful? Obviously we can visualize it, but is there any part of the code that gives us this conclusion?
Thanks,