motional / nuplan-devkit

The devkit of the nuPlan dataset.
https://www.nuplan.org

Running closed loop simulation with all agents as ML policy #359

Open HenryDykhne opened 10 months ago

HenryDykhne commented 10 months ago

Hello,

I am trying to find where the non-ego tracks are assigned their controllers in the closed-loop reactive simulation, so that I can change them from the IDM controller to the ML controller.

Additionally, is the ML controller already pre-trained, or does a weights file exist for it? Or is each user expected to train the controller themselves on the nuPlan dataset for comparison?

Finally, I would also like to know whether there is a comprehensive list of config options for the simulation and training Hydra configs; so far I can only find the ones in the tutorial.

At the moment, I am trying to run the simulation with the following config, using the model trained in the tutorial nuplan_framework.ipynb:

last_experiment = sorted(os.listdir(LOG_DIR))[-1]
train_experiment_dir = sorted(LOG_DIR.iterdir())[-1]
checkpoint = sorted((train_experiment_dir / 'checkpoints').iterdir())[-1]

MODEL_PATH = str(checkpoint).replace("=", "\\=")  # escape '=' so the hydra override parses correctly

# Location of path with all simulation configs
CONFIG_PATH = '../nuplan/planning/script/config/simulation'
CONFIG_NAME = 'default_simulation'

# Select the planner and simulation challenge
PLANNER = 'ml_planner'  # [simple_planner, ml_planner]
OBSERVATION = 'ego_centric_ml_agents_observation'  # alternative: 'idm_agents_observation'
CHALLENGE = 'closed_loop_reactive_agents'  # [open_loop_boxes, closed_loop_nonreactive_agents, closed_loop_reactive_agents]
DATASET_PARAMS = [
    'scenario_builder=nuplan_mini',  # use nuplan mini database
    'scenario_filter=all_scenarios',  # initially select all scenarios in the database
    'scenario_filter.scenario_types=[near_multiple_vehicles, on_pickup_dropoff, starting_unprotected_cross_turn, high_magnitude_jerk]',  # select scenario types
    'scenario_filter.num_scenarios_per_type=3',  # use 3 scenarios per scenario type
]

# Name of the experiment
EXPERIMENT = 'simulation_simple_experiment'

# Initialize configuration management system
hydra.core.global_hydra.GlobalHydra.instance().clear()  # reinitialize hydra if already initialized
hydra.initialize(config_path=CONFIG_PATH)

# Compose the configuration
cfg = hydra.compose(config_name=CONFIG_NAME, overrides=[
    f'experiment_name={EXPERIMENT}',
    f'group={SAVE_DIR}',    
    f'planner={PLANNER}',
    f'model=raster_model',
    'planner.ml_planner.model_config=${model}',  # hydra notation to select model config
    f'planner.ml_planner.checkpoint_path={MODEL_PATH}',  # this path can be replaced by the checkpoint of the model trained in the previous section
    f'observation={OBSERVATION}',
    'observation.model_config=${model}',
    f'observation.checkpoint_path={MODEL_PATH}',
    f'+simulation={CHALLENGE}',
    *DATASET_PARAMS,
])

Upon running, I am greeted with this error, repeated for every scenario:

(wrapped_fn pid=43068) Traceback (most recent call last):
(wrapped_fn pid=43068)   File "/home/ehdykhne/nuplan-devkit/nuplan/planning/simulation/runner/executor.py", line 27, in run_simulation
(wrapped_fn pid=43068)     return sim_runner.run()
(wrapped_fn pid=43068)   File "/home/ehdykhne/nuplan-devkit/nuplan/planning/simulation/runner/simulations_runner.py", line 117, in run
(wrapped_fn pid=43068)     self.simulation.propagate(trajectory)
(wrapped_fn pid=43068)   File "/home/ehdykhne/nuplan-devkit/nuplan/planning/simulation/simulation.py", line 172, in propagate
(wrapped_fn pid=43068)     self._observations.update_observation(iteration, next_iteration, self._history_buffer)
(wrapped_fn pid=43068)   File "/home/ehdykhne/nuplan-devkit/nuplan/planning/simulation/observation/abstract_ml_agents.py", line 104, in update_observation
(wrapped_fn pid=43068)     predictions = self._infer_model(features)
(wrapped_fn pid=43068)   File "/home/ehdykhne/nuplan-devkit/nuplan/planning/simulation/observation/ego_centric_ml_agents.py", line 72, in _infer_model
(wrapped_fn pid=43068)     raise ValueError(f"Prediction does not have the output '{self.prediction_type}'")
(wrapped_fn pid=43068) ValueError: Prediction does not have the output 'agents_trajectory'
(wrapped_fn pid=43068) WARNING:nuplan.planning.simulation.runner.executor:Simulation failed with error:
(wrapped_fn pid=43068)  Prediction does not have the output 'agents_trajectory'
(wrapped_fn pid=43068) WARNING:nuplan.planning.simulation.runner.executor:
(wrapped_fn pid=43068) Failed simulation [log,token]:
(wrapped_fn pid=43068)  [2021.06.28.16.29.11_veh-38_01415_01821, 0a4ebece4a5858e0]
(wrapped_fn pid=43068) 
(wrapped_fn pid=43068) WARNING:nuplan.planning.simulation.runner.executor:----------- Simulation failed!
Ray objects: 100%|██████████| 10/10 [00:03<00:00,  2.75it/s]
2023-11-14 22:13:49,138 WARNING {/home/ehdykhne/nuplan-devkit/nuplan/planning/simulation/runner/executor.py:123}  Failed Simulation.
 'Traceback (most recent call last):
  File "/home/ehdykhne/nuplan-devkit/nuplan/planning/simulation/runner/executor.py", line 27, in run_simulation
    return sim_runner.run()
  File "/home/ehdykhne/nuplan-devkit/nuplan/planning/simulation/runner/simulations_runner.py", line 117, in run
    self.simulation.propagate(trajectory)
  File "/home/ehdykhne/nuplan-devkit/nuplan/planning/simulation/simulation.py", line 172, in propagate
    self._observations.update_observation(iteration, next_iteration, self._history_buffer)
  File "/home/ehdykhne/nuplan-devkit/nuplan/planning/simulation/observation/abstract_ml_agents.py", line 104, in update_observation
    predictions = self._infer_model(features)
  File "/home/ehdykhne/nuplan-devkit/nuplan/planning/simulation/observation/ego_centric_ml_agents.py", line 72, in _infer_model
    raise ValueError(f"Prediction does not have the output '{self.prediction_type}'")
ValueError: Prediction does not have the output 'agents_trajectory'
'
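
For context on what the traceback means: `update_observation` looks up a fixed key (`agents_trajectory`) in the dictionary of model outputs, and the tutorial's raster model only predicts an ego trajectory. A minimal stand-in for the failing check (the function and key names here are illustrative, not the actual nuPlan code):

```python
# Hypothetical sketch of the check raising in ego_centric_ml_agents.py:
# the ML-agents observation expects a specific prediction key in the output.
def infer_agents(predictions: dict, prediction_type: str = "agents_trajectory"):
    if prediction_type not in predictions:
        raise ValueError(f"Prediction does not have the output '{prediction_type}'")
    return predictions[prediction_type]

# An ego-only model (like the tutorial raster model) fails the lookup:
raster_output = {"trajectory": [[0.0, 0.0, 0.0]]}
try:
    infer_agents(raster_output)
except ValueError as exc:
    print(exc)  # Prediction does not have the output 'agents_trajectory'
```

In other words, switching the observation to `ego_centric_ml_agents_observation` only helps if the model it wraps actually outputs agent trajectories, which the ego-only tutorial model does not.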
Any help on this would be highly appreciated.

jessapinkman commented 10 months ago

Hi, I met the same issue. I am going to try to use my own controller, like IDM, to control the background agents, but I don't actually know how. Have you tried this before?

HenryDykhne commented 10 months ago

> hi, i met the same issue, i am going to try to use my own controller like IDM to control the background agents, but actually i dont know how. Have you ever tried before?

I identified classes that seem designed to make this possible, but I have not had any luck getting them running. Please let me know if you make any progress in this area.

jessapinkman commented 10 months ago

@HenryDykhne Hey, I may have found something new. I trained a simple planner, ran it in the simulation, and saved the simulation config as a .yaml file (in the attached files); this .yaml file may contain the full simulation configuration. I also found the `observation` key in the .yaml, whose `_target_` value points to the IDM policy for the background agents:

```yaml
observation:
  _target_: nuplan.planning.simulation.observation.idm_agents.IDMAgents
  _convert_: 'all'
  target_velocity: 10
  min_gap_to_lead_agent: 1.0
  headway_time: 1.5
  accel_max: 1.0
  decel_max: 2.0
  open_loop_detections_types:
```

So, if we want to use our own ML planner to control the background agents, maybe we should inherit from and override `nuplan/planning/simulation/observation/abstract_observation.AbstractObservation`.
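
That subclassing idea could look roughly like the following sketch (the class and method names are illustrative stand-ins, not the actual `AbstractObservation` interface; check `abstract_observation.py` in the devkit for the real signatures):

```python
from abc import ABC, abstractmethod

# Illustrative stand-in for nuplan's AbstractObservation interface.
class AbstractObservation(ABC):
    @abstractmethod
    def update_observation(self, iteration, next_iteration, history) -> None:
        """Advance the background agents by one simulation step."""

class PolicyAgentsObservation(AbstractObservation):
    """Background agents driven by a custom policy instead of IDM (sketch)."""

    def __init__(self, policy):
        self.policy = policy
        self.predictions = None

    def update_observation(self, iteration, next_iteration, history) -> None:
        # Feed the rolling history to the policy and keep its predictions
        # as the agents' next states (the real class updates agent tracks).
        self.predictions = self.policy(history)

# Usage with a trivial stand-in policy:
obs = PolicyAgentsObservation(policy=lambda history: {"agents": history})
obs.update_observation(0, 1, history=["state_0"])
print(obs.predictions)  # {'agents': ['state_0']}
```

The subclass would then be registered as a new observation config group entry so it can be selected the same way as `idm_agents_observation`.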

Finally, I am trying to figure out how this works. The above is my speculation; if there are any mistakes, please point them out. I hope we can solve this problem through discussion, since the maintainers may not reply very soon. simulation_vector_experiment.zip