autonomousvision / carla_garage

[ICCV'23] Hidden Biases of End-to-End Driving Models
MIT License

I can't evaluate the pretrained model. #31

Closed: donymorph closed this issue 5 months ago

donymorph commented 5 months ago

```
(carla15) (base) officepc@officepc-MS-7D90:~/Desktop/CARLA_0.9.15/carla_garage$ python leaderboard_evaluator_local.py \
    --agent-config pretrained_models/lav/aim_02_05_withheld_0 \
    --agent team_code/sensor_agent.py \
    --routes /home/officepc/Desktop/CARLA_0.9.15/carla_garage/leaderboard/data/lav.xml \
    --scenarios /home/officepc/Desktop/CARLA_0.9.15/carla_garage/leaderboard/data/scenarios/eval_scenarios.json
leaderboard_evaluator_local.py:96: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
  if LooseVersion(dist.version) < LooseVersion('0.9.10'):
Starting new route. Load and run scenarios.
```

========= Preparing RouteScenario_0 (repetition 0) =========

```
Setting up the agent
Uncertainty weighting?: 1
Direct control prediction?: 1
Reduce target speed value by two m/s.
Use stop sign controller: 0
pretrained_models/lav/aim_02_05_withheld_0/model_0030.pth
0it [00:00, ?it/s]
Loading 0 lidars from 0 folders
Total amount of routes: 0
Crashed routes: 0
Perfect routes: 0
0it [00:00, ?it/s]
Loading 0 lidars from 0 folders
Total amount of routes: 0
Crashed routes: 0
Perfect routes: 0
Loading the world
```

The scenario could not be loaded:

The CARLA server uses the wrong map! This scenario requires to use map Town02

```
Traceback (most recent call last):
  File "leaderboard_evaluator_local.py", line 339, in _load_and_run_scenario
    self._load_and_wait_for_world(args, config.town, config.ego_vehicles)
  File "leaderboard_evaluator_local.py", line 233, in _load_and_wait_for_world
    "This scenario requires to use map {}".format(town))
Exception: The CARLA server uses the wrong map!This scenario requires to use map Town02
```

Registering the route statistics

Kait0 commented 5 months ago

You seem to be using the wrong CARLA version (CARLA_0.9.15). This repository needs CARLA 0.9.10.1 (the download link recently changed: https://github.com/carla-simulator/carla/issues/7196).

donymorph commented 5 months ago

```
(carla99) officepc@officepc-MS-7D90:~/Desktop/carla_garage$ python leaderboard/leaderboard_evaluator_local.py \
    --agent-config pretrained_models/lav/aim_02_05_withheld_0 \
    --agent team_code/sensor_agent.py \
    --routes leaderboard/data/lav.xml \
    --scenarios leaderboard/data/scenarios/eval_scenarios.json
```

```
/home/officepc/Desktop/CARLA_0.9.10/PythonAPI/carla/dist/ccarla-0.9.10-py3.7-linux-x86_64.egg
Traceback (most recent call last):
  File "leaderboard/leaderboard_evaluator_local.py", line 499, in main
    leaderboard_evaluator = LeaderboardEvaluator(arguments, statistics_manager)
  File "leaderboard/leaderboard_evaluator_local.py", line 100, in __init__
    dist = pkg_resources.get_distribution("carla")
  File "/home/officepc/miniconda3/envs/carla99/lib/python3.7/site-packages/pkg_resources/__init__.py", line 478, in get_distribution
    dist = get_provider(dist)
  File "/home/officepc/miniconda3/envs/carla99/lib/python3.7/site-packages/pkg_resources/__init__.py", line 354, in get_provider
    return working_set.find(moduleOrReq) or require(str(moduleOrReq))[0]
  File "/home/officepc/miniconda3/envs/carla99/lib/python3.7/site-packages/pkg_resources/__init__.py", line 909, in require
    needed = self.resolve(parse_requirements(requirements))
  File "/home/officepc/miniconda3/envs/carla99/lib/python3.7/site-packages/pkg_resources/__init__.py", line 795, in resolve
    raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'carla' distribution was not found and is required by the application

Traceback (most recent call last):
  File "leaderboard/leaderboard_evaluator_local.py", line 509, in <module>
    main()
  File "leaderboard/leaderboard_evaluator_local.py", line 505, in main
    del leaderboard_evaluator
UnboundLocalError: local variable 'leaderboard_evaluator' referenced before assignment
Exception ignored in: <function LeaderboardEvaluator.__del__ at 0x7ff5bcc834d0>
Traceback (most recent call last):
  File "leaderboard/leaderboard_evaluator_local.py", line 135, in __del__
    self._cleanup()
  File "leaderboard/leaderboard_evaluator_local.py", line 147, in _cleanup
    if self.manager and self.manager.get_running_status() \
AttributeError: 'LeaderboardEvaluator' object has no attribute 'manager'
```

In leaderboard_evaluator_local.py I added the following, but the leaderboard evaluator still tries to find the carla package in the conda environment:

```python
carla_egg_path = os.path.expanduser(
    '~/Desktop/CARLA_0.9.10/PythonAPI/carla/dist/ccarla-0.9.10-py3.7-linux-x86_64.egg')
sys.path.append(carla_egg_path)
print(carla_egg_path)
import carla
```

Kait0 commented 5 months ago

You need to add the egg to the Python path (not sure if os.path will work). You can find an example here:
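A minimal sketch of what this could look like, done via `PYTHONPATH` rather than in-script `sys.path` edits. The install location and egg file name below are assumptions based on a default CARLA 0.9.10.1 download; adjust them to your setup:

```shell
# Prepend the CARLA 0.9.10.1 egg to PYTHONPATH so that a later
# 'import carla' resolves from the simulator's dist/ folder instead of
# any 'carla' package installed in the active conda environment.
# CARLA_ROOT and the egg name are assumptions; adjust to your install.
export CARLA_ROOT="$HOME/Desktop/CARLA_0.9.10"
export PYTHONPATH="${CARLA_ROOT}/PythonAPI/carla/dist/carla-0.9.10-py3.7-linux-x86_64.egg:${PYTHONPATH}"
```

Setting this in the shell (or launch script) before starting the evaluator avoids patching `sys.path` inside leaderboard_evaluator_local.py.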

donymorph commented 5 months ago

It worked. I just deleted this part in leaderboard_evaluator_local.py:

```python
100  dist = pkg_resources.get_distribution("carla")
101  if dist.version != 'leaderboard':
103      if LooseVersion(dist.version) < LooseVersion('0.9.10'):
104          raise ImportError("CARLA version 0.9.10.1 or newer required. CARLA version found: {}".format(dist))
```

donymorph commented 5 months ago

========= Preparing RouteScenario_14 (repetition 0) =========

```
Setting up the agent
Uncertainty weighting?: 1
Direct control prediction?: 1
Reduce target speed value by two m/s.
Use stop sign controller: 0
pretrained_models/lav/aim_02_05_withheld_0/model_0030.pth
0it [00:00, ?it/s]
Loading 0 lidars from 0 folders
Total amount of routes: 0
Crashed routes: 0
Perfect routes: 0
0it [00:00, ?it/s]
Loading 0 lidars from 0 folders
Total amount of routes: 0
Crashed routes: 0
Perfect routes: 0
Loading the world
Running the route
```

Stopping the route, the agent has crashed:

An output representation was chosen that was not trained.

```
Traceback (most recent call last):
  File "/home/officepc/Desktop/carla_garage/leaderboard/leaderboard/scenarios/scenario_manager_local.py", line 152, in _tick_scenario
    ego_action = self._agent()
  File "/home/officepc/Desktop/carla_garage/leaderboard/leaderboard/autoagents/agent_wrapper_local.py", line 85, in __call__
    return self._agent(self.sensor_list_names)
  File "/home/officepc/Desktop/carla_garage/leaderboard/leaderboard/autoagents/autonomous_agent.py", line 115, in __call__
    control = self.run_step(input_data, timestamp)
  File "/home/officepc/miniconda3/envs/carla99/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "team_code/sensor_agent.py", line 559, in run_step
    raise ValueError('An output representation was chosen that was not trained.')
ValueError: An output representation was chosen that was not trained.
```

During handling of the above exception, another exception occurred:

```
Traceback (most recent call last):
  File "leaderboard/leaderboard_evaluator_local.py", line 381, in _load_and_run_scenario
    self.manager.run_scenario()
  File "/home/officepc/Desktop/carla_garage/leaderboard/leaderboard/scenarios/scenario_manager_local.py", line 136, in run_scenario
    self._tick_scenario(timestamp)
  File "/home/officepc/Desktop/carla_garage/leaderboard/leaderboard/scenarios/scenario_manager_local.py", line 159, in _tick_scenario
    raise AgentError(e)
leaderboard.autoagents.agent_wrapper_local.AgentError: An output representation was chosen that was not trained.
```

Stopping the route

========= Results of RouteScenario_14 (repetition 0) ------ FAILURE =========

```
╒═════════════════════════════════╤═════════════════════╕
│ Start Time                      │ 2024-03-21 22:51:02 │
├─────────────────────────────────┼─────────────────────┤
│ End Time                        │ 2024-03-21 22:51:02 │
├─────────────────────────────────┼─────────────────────┤
│ Duration (System Time)          │ 0.09s               │
├─────────────────────────────────┼─────────────────────┤
│ Duration (Game Time)            │ 0.1s                │
├─────────────────────────────────┼─────────────────────┤
│ Ratio (System Time / Game Time) │ 1.082               │
╘═════════════════════════════════╧═════════════════════╛
```

```
╒═══════════════════════╤═════════╤═════════╕
│ Criterion             │ Result  │ Value   │
├───────────────────────┼─────────┼─────────┤
│ RouteCompletionTest   │ FAILURE │ 0.0 %   │
├───────────────────────┼─────────┼─────────┤
│ OutsideRouteLanesTest │ SUCCESS │ 0 %     │
├───────────────────────┼─────────┼─────────┤
│ CollisionTest         │ SUCCESS │ 0 times │
├───────────────────────┼─────────┼─────────┤
│ RunningRedLightTest   │ SUCCESS │ 0 times │
├───────────────────────┼─────────┼─────────┤
│ RunningStopTest       │ SUCCESS │ 0 times │
├───────────────────────┼─────────┼─────────┤
│ InRouteTest           │ SUCCESS │         │
├───────────────────────┼─────────┼─────────┤
│ AgentBlockedTest      │ SUCCESS │         │
├───────────────────────┼─────────┼─────────┤
│ Timeout               │ SUCCESS │         │
╘═══════════════════════╧═════════╧═════════╛
```

```
Registering the route statistics
Save state.
Starting new route. Load and run scenarios.
```

========= Preparing RouteScenario_15 (repetition 0) =========

```
Setting up the agent
Uncertainty weighting?: 1
Direct control prediction?: 1
Reduce target speed value by two m/s.
Use stop sign controller: 0
pretrained_models/lav/aim_02_05_withheld_0/model_0030.pth
0it [00:00, ?it/s]
Loading 0 lidars from 0 folders
Total amount of routes: 0
Crashed routes: 0
Perfect routes: 0
0it [00:00, ?it/s]
Loading 0 lidars from 0 folders
Total amount of routes: 0
Crashed routes: 0
Perfect routes: 0
Loading the world
Running the route
```

Stopping the route, the agent has crashed:

An output representation was chosen that was not trained.

```
Traceback (most recent call last):
  File "/home/officepc/Desktop/carla_garage/leaderboard/leaderboard/scenarios/scenario_manager_local.py", line 152, in _tick_scenario
    ego_action = self._agent()
  File "/home/officepc/Desktop/carla_garage/leaderboard/leaderboard/autoagents/agent_wrapper_local.py", line 85, in __call__
    return self._agent(self.sensor_list_names)
  File "/home/officepc/Desktop/carla_garage/leaderboard/leaderboard/autoagents/autonomous_agent.py", line 115, in __call__
    control = self.run_step(input_data, timestamp)
  File "/home/officepc/miniconda3/envs/carla99/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "team_code/sensor_agent.py", line 559, in run_step
    raise ValueError('An output representation was chosen that was not trained.')
ValueError: An output representation was chosen that was not trained.
```

During handling of the above exception, another exception occurred:

```
Traceback (most recent call last):
  File "leaderboard/leaderboard_evaluator_local.py", line 381, in _load_and_run_scenario
    self.manager.run_scenario()
  File "/home/officepc/Desktop/carla_garage/leaderboard/leaderboard/scenarios/scenario_manager_local.py", line 136, in run_scenario
    self._tick_scenario(timestamp)
  File "/home/officepc/Desktop/carla_garage/leaderboard/leaderboard/scenarios/scenario_manager_local.py", line 159, in _tick_scenario
    raise AgentError(e)
leaderboard.autoagents.agent_wrapper_local.AgentError: An output representation was chosen that was not trained.
```

Stopping the route

========= Results of RouteScenario_15 (repetition 0) ------ FAILURE =========

```
╒═════════════════════════════════╤═════════════════════╕
│ Start Time                      │ 2024-03-21 22:51:08 │
├─────────────────────────────────┼─────────────────────┤
│ End Time                        │ 2024-03-21 22:51:08 │
├─────────────────────────────────┼─────────────────────┤
│ Duration (System Time)          │ 0.1s                │
├─────────────────────────────────┼─────────────────────┤
│ Duration (Game Time)            │ 0.1s                │
├─────────────────────────────────┼─────────────────────┤
│ Ratio (System Time / Game Time) │ 0.997               │
╘═════════════════════════════════╧═════════════════════╛
```

```
╒═══════════════════════╤═════════╤═════════╕
│ Criterion             │ Result  │ Value   │
├───────────────────────┼─────────┼─────────┤
│ RouteCompletionTest   │ FAILURE │ 0.0 %   │
├───────────────────────┼─────────┼─────────┤
│ OutsideRouteLanesTest │ SUCCESS │ 0 %     │
├───────────────────────┼─────────┼─────────┤
│ CollisionTest         │ SUCCESS │ 0 times │
├───────────────────────┼─────────┼─────────┤
│ RunningRedLightTest   │ SUCCESS │ 0 times │
├───────────────────────┼─────────┼─────────┤
│ RunningStopTest       │ SUCCESS │ 0 times │
├───────────────────────┼─────────┼─────────┤
│ InRouteTest           │ SUCCESS │         │
├───────────────────────┼─────────┼─────────┤
│ AgentBlockedTest      │ SUCCESS │         │
├───────────────────────┼─────────┼─────────┤
│ Timeout               │ SUCCESS │         │
╘═══════════════════════╧═════════╧═════════╛
```

```
Registering the route statistics
Save state.
Registering the global statistics
```

Now I am running into another issue: RouteCompletionTest reports FAILURE.

Kait0 commented 5 months ago

Depending on which model you are trying to evaluate, you might additionally need to set `export DIRECT=0` in the script (for models with the target speed classification).
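A minimal sketch of this suggestion, assuming the evaluator is launched from the same shell or launch script:

```shell
# DIRECT=0 disables the direct control prediction path for models trained
# with the target-speed classification head (per the maintainer's comment).
# It must be exported before leaderboard_evaluator_local.py starts, since
# the agent reads the variable at setup time.
export DIRECT=0
# ...then launch the evaluator with the same command as before.
```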

donymorph commented 5 months ago

[Screenshot from 2024-03-22 04-57-00]

Thank you. Somehow I managed to run the pretrained models on leaderboard version 2 by changing `direct = os.environ.get('DIRECT', 1)` to `direct = os.environ.get('DIRECT', 0)`.