autonomousvision / carla_garage

[ICCV'23] Hidden Biases of End-to-End Driving Models
MIT License

LAV benchmark #22

Closed dvdhfnr closed 7 months ago

dvdhfnr commented 7 months ago

Hi! Thanks for your great work!

I have a question for clarification regarding the "LAV" benchmark. In their original work they seem to be using the default scenario file provided by the CARLA Leaderboard 1.0. If I interpret your code/docs correctly, you are using the scenario file from the TransFuser paper, and only the routes from the LAV paper:

The --scenarios option should be set to eval_scenarios.json for both benchmarks.

Is this correct? Are there any reasons for doing so?

The traffic density seems to be the same as in the original LAV paper: https://github.com/autonomousvision/carla_garage/blob/fb4b180332e465c53095ad6de29429442664882c/leaderboard/leaderboard/scenarios/route_scenario_local.py#L454

Are there any other differences?

Kait0 commented 7 months ago

I don't think there is a "default" scenario file from the CARLA Leaderboard 1.0. The CARLA team released many different scenario files in various branches/repositories (scenario runner / leaderboard) that were also changed over time. The scenario file used by LAV for evaluation only includes scenarios 1, 3, and 4. We changed the scenario file to the one used in the TransFuser repository/longest6 because it additionally includes scenarios of types 7, 8, 9, and 10. See also this discussion.

There are no other differences; routes and traffic densities are the same.
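One way to check which scenario types a given file defines is to inspect its JSON directly. The sketch below is illustrative only: the `available_scenarios`/`scenario_type` layout mimics the Leaderboard 1.0 scenario-file format, but the embedded sample data is invented, so run the helper against your actual `eval_scenarios.json` to verify.

```python
import json

# Hypothetical excerpt in the assumed Leaderboard 1.0 layout: towns map to
# lists of entries, each tagged with a "scenario_type". Real files also carry
# trigger-point coordinates under "available_event_configurations".
sample = """
{
  "available_scenarios": [
    {
      "Town02": [
        {"scenario_type": "Scenario1", "available_event_configurations": []},
        {"scenario_type": "Scenario3", "available_event_configurations": []},
        {"scenario_type": "Scenario4", "available_event_configurations": []}
      ]
    }
  ]
}
"""

def scenario_types(scenario_json: str) -> set:
    """Collect the set of scenario types defined in a scenario file."""
    data = json.loads(scenario_json)
    types = set()
    for town_block in data["available_scenarios"]:
        for entries in town_block.values():
            for entry in entries:
                types.add(entry["scenario_type"])
    return types

print(sorted(scenario_types(sample)))
# A LAV-style file would show only types 1/3/4; eval_scenarios.json should
# additionally list types 7, 8, 9 and 10.
```

Pointing `scenario_types` at both files makes the difference described above directly visible.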