I spent some time trying to integrate the scenario_runner into the multi-processing RL training, but it didn't work out smoothly.
What I want to try is to use your Roach expert to collect data from .xml routes and .json scenarios, based on the official CARLA leaderboard and scenario_runner code, so I can see the leaderboard result for your Roach. Since, as stated here:
> more naturally than hand-crafted CARLA experts
Based on the official leaderboard routes and scenarios, I could compare results on the same fixed routes and scenarios, rather than on the randomized ones this repo uses.
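For context, this is roughly the evaluator invocation I have in mind (just a sketch: the route, scenario, agent, and config file names are placeholders I made up):

```sh
# Sketch: running the official leaderboard evaluator with self-defined files.
# All file names below are hypothetical placeholders.
python ${LEADERBOARD_ROOT}/leaderboard/leaderboard_evaluator.py \
    --routes=my_routes.xml \
    --scenarios=my_scenarios.json \
    --agent=roach_leaderboard_agent.py \
    --agent-config=config_agent.yaml \
    --track=MAP
```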
But when I read the data-collection code following the README (https://github.com/zhejz/carla-roach#quick-start-collect-an-expert-dataset-using-roach), it seems that your agent file is not suitable to run on the leaderboard as-is. For example, the class:
```python
class RlBirdviewAgent():
```
does not inherit from autonomous_agent.AutonomousAgent, which the leaderboard requires, and the module also lacks get_entry_point(), etc.
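I imagine the adaptation would be a thin wrapper along these lines. This is only a sketch under my assumptions: the import path for RlBirdviewAgent, its constructor and run_step signature, and the sensor list are all guesses on my part and would need to be checked against the actual repo.

```python
# Rough sketch of a leaderboard-compatible wrapper around Roach's agent.
# Everything except the leaderboard imports is an assumption on my part.
from leaderboard.autoagents.autonomous_agent import AutonomousAgent, Track

from agents.rl_birdview.rl_birdview_agent import RlBirdviewAgent  # assumed path


def get_entry_point():
    # The leaderboard imports the agent module and calls this function
    # to find out which class to instantiate.
    return 'RoachLeaderboardAgent'


class RoachLeaderboardAgent(AutonomousAgent):
    def setup(self, path_to_conf_file):
        # MAP track, since the bird's-eye-view input needs privileged map info.
        self.track = Track.MAP
        self._agent = RlBirdviewAgent(path_to_conf_file)  # assumed constructor

    def sensors(self):
        # Roach's obs_configs would have to be translated into the
        # leaderboard sensor-dict format, e.g.:
        return [
            {'type': 'sensor.speedometer', 'reading_frequency': 20, 'id': 'speed'},
        ]

    def run_step(self, input_data, timestamp):
        # Convert input_data into the observation the policy expects and
        # return the carla.VehicleControl it computes (signature assumed).
        return self._agent.run_step(input_data, timestamp)
```

With such a wrapper, the evaluator's --agent flag would point at this file, and get_entry_point() tells it which class to load.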
Related issue: https://github.com/zhejz/carla-roach/issues/8, in particular this comment: https://github.com/zhejz/carla-roach/issues/8#issuecomment-992429088
Has anyone tried this with the offline official leaderboard and self-defined XML routes and JSON scenarios?