zhejz / carla-roach

Roach: End-to-End Urban Driving by Imitating a Reinforcement Learning Coach. ICCV 2021.
https://zhejz.github.io/roach

Test roach through official leaderboard and self-defined xml route and json scenarios #10

Closed Kin-Zhang closed 2 years ago

Kin-Zhang commented 2 years ago

Related issue: https://github.com/zhejz/carla-roach/issues/8

From here: https://github.com/zhejz/carla-roach/issues/8#issuecomment-992429088

I spent some time trying to integrate the scenario_runner into the multi-processing RL training but it didn't work out smoothly.

What I want to try is to use your Roach expert to collect data from .xml routes and .json scenarios, based on the official CARLA leaderboard and scenario code, so that I can see Roach's leaderboard result. Since, as said here:

more naturally than hand-crafted CARLA experts

Based on the official leaderboard and scenarios, I could then compare results on the same routes and scenarios, rather than the random ones this repo uses.
But when I read the code behind the README's collection quick start (https://github.com/zhejz/carla-roach#quick-start-collect-an-expert-dataset-using-roach), it seems that your agent file is not directly suitable for running on the leaderboard, e.g. the file:

class RlBirdviewAgent():
  1. It does not inherit from autonomous_agent.AutonomousAgent, which the leaderboard requires, and it also lacks get_entry_point() etc. (see the sketch after this list).
  2. I have no idea how to start running it on the leaderboard by following your README together with the CARLA official guide: https://leaderboard.carla.org/get_started/#3-creating-your-own-autonomous-agent
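
For reference, here is a minimal sketch of what a leaderboard-compatible wrapper could look like. Only the AutonomousAgent / get_entry_point interface comes from the leaderboard code; the class name, sensor list, and the way the Roach policy would be loaded are my own assumptions, not part of this repo.

```python
# Hypothetical wrapper module (file name and class name are illustrative).
import carla
from leaderboard.autoagents.autonomous_agent import AutonomousAgent, Track


def get_entry_point():
    # The leaderboard imports this module and calls get_entry_point()
    # to find the agent class to instantiate.
    return 'RoachLeaderboardAgent'


class RoachLeaderboardAgent(AutonomousAgent):
    def setup(self, path_to_conf_file):
        # MAP track, since the bird's-eye-view input needs map information.
        self.track = Track.MAP
        # Placeholder: building the Roach policy from path_to_conf_file
        # would have to reuse the logic in rl_birdview_agent.py.
        self._policy = None

    def sensors(self):
        # The sensor list must match whatever the wrapped policy expects.
        return [
            {'type': 'sensor.other.gnss', 'x': 0.0, 'y': 0.0, 'z': 0.0, 'id': 'gnss'},
            {'type': 'sensor.speedometer', 'reading_frequency': 20, 'id': 'speed'},
        ]

    def run_step(self, input_data, timestamp):
        # input_data maps each sensor id to a (frame, data) tuple.
        # Convert it to the observation format of the Roach policy,
        # run the policy, and return a carla.VehicleControl.
        control = carla.VehicleControl(throttle=0.0, steer=0.0, brake=1.0)
        return control

    def destroy(self):
        pass
```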

Has anyone tried this with the offline official leaderboard and self-defined XML routes and JSON scenarios?

Kin-Zhang commented 2 years ago

Found someone who did it here: https://github.com/Kait0/carla-roach

related commit: https://github.com/Kait0/carla-roach/commit/4e1a961dc9f3bf19b01b5b53e60d76f1081037e7