motional / nuplan-devkit

The devkit of the nuPlan dataset.
https://www.nuplan.org

Questions about ego simulation result using log future controller #258

Closed Jeff09 closed 1 year ago

Jeff09 commented 1 year ago

Hi nuplan team,

I used the log_future_planner + perfect_tracking_controller for the ego prediction, which is supposed to achieve almost a full score on all three tasks in the nuplan_challenge_scenarios dataset. However, in the closed_loop_reactive_agents task the score was only around 0.8, and in the closed_loop_nonreactive_agents task it was around 0.95. I wonder why the result is not as good as expected? Thank you in advance.

patk-motional commented 1 year ago

Hi @Jeff09,

The main reason is that we are using closed-loop metrics. They are meant to be a measure of how any planner drives. I suspect that closed_loop_reactive_agents scores lower because you are controlling the ego in "open-loop" by blindly following the expert trajectory. This could lead to collisions because the agent behavior has diverged from the log.
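To make the "open-loop" point concrete, here is a minimal sketch (hypothetical class and field names, not the actual devkit API) of a log-replay planner: it returns the logged future regardless of where the simulated ego currently is, so any divergence between the simulation and the log goes uncorrected.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class EgoState:
    """Simplified ego state: position and timestamp (illustrative only)."""
    x: float
    y: float
    t: float


class LogReplayPlanner:
    """Replays the expert's logged future, no matter where the ego actually is."""

    def __init__(self, logged_trajectory: List[EgoState]):
        self._log = logged_trajectory

    def plan(self, current_state: EgoState) -> List[EgoState]:
        # Only the timestamp of `current_state` is used; its position is
        # ignored. If the reactive agents behave differently from the log
        # (e.g. a leading agent brakes earlier), the ego keeps following the
        # recorded path anyway, which is how collisions can arise.
        return [s for s in self._log if s.t > current_state.t]
```

The key property is that two very different ego states at the same timestamp produce the identical plan, i.e. there is no feedback from the simulated world back into the trajectory.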

As for closed_loop_nonreactive_agents, it makes sense that the score is close to 1. However, there could be other metrics that score the driving behavior low. Comfort or drivable area compliance, for instance.

The main takeaway here is that in closed-loop it is hard to quantify what perfect driving (a score of 1.0) is. The expert trajectory is not defined by having a score of 1.0, but rather by what the ego did during data collection. Hope that helps.

Jeff09 commented 1 year ago

Hi @patk-motional,

Thank you for your explanation. I still have two more questions.

First, why could it lead to collisions if I'm controlling the ego by blindly following the expert trajectory in the closed_loop_reactive_agents task? It doesn't make sense to me, because, as you said:

The expert trajectory is not defined by having a score of 1.0 but rather by what the ego did during data collection.

It's reasonable to say that at least no collisions happened during data collection, right? If that's the case, I would also expect no collisions to happen in the closed_loop_reactive_agents task. I wonder if the reason is that the other IDM agents are not intelligent enough.

Second, if I use log_future_planner + two-stage controller in the simulation, the scores on both closed_loop_nonreactive_agents and closed_loop_reactive_agents decrease by around 1.0 compared with the simulation using log_future_planner + perfect controller. May I ask whether this is reasonable, and what the reason might be?

Thank you so much.

patk-motional commented 1 year ago

To your first point, it could be that the smart agents drive slower than the recorded agents. This would lead to the ego rear-ending a leading agent. However, the reduction in score can also be due to other metrics. Please inspect the histogram tab in nuBoard for an in-depth breakdown.
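For context, the reactive agents mentioned in the thread are IDM agents, and the standard Intelligent Driver Model formulation makes the "smart agents drive slower" effect easy to see. The sketch below (illustrative parameter values, not the devkit's actual configuration) shows that an IDM follower commands hard braking when the gap to its leader is tight, even in situations where the recorded agent simply held its logged speed:

```python
import math


def idm_acceleration(v, v_lead, gap,
                     v0=15.0, T=1.5, s0=2.0, a_max=1.5, b=2.0, delta=4.0):
    """Standard IDM acceleration.

    v:      follower speed [m/s]
    v_lead: leading vehicle speed [m/s]
    gap:    bumper-to-bumper gap [m]
    The parameter defaults (desired speed v0, time headway T, minimum gap s0,
    max acceleration a_max, comfortable braking b) are made up for illustration.
    """
    dv = v - v_lead  # closing speed
    # Desired dynamic gap: minimum gap + headway term + braking-interaction term.
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * math.sqrt(a_max * b)))
    return a_max * (1.0 - (v / v0) ** delta - (s_star / gap) ** 2)
```

With these parameters, a follower at 10 m/s with only a 10 m gap brakes strongly (the desired gap of 17 m far exceeds the actual one), whereas with a large gap it gently accelerates toward its desired speed. An IDM agent that keeps a safe headway therefore falls behind its logged counterpart, and an ego that blindly replays the log can run into it.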

The two-stage controller is a combination of a tracker (LQR) and a motion model (a kinematic bicycle model). It attempts to track the given trajectory. The idea is to approximate the low-level system of an AV architecture. However, the tracker is not perfect, and the log future planner is not aware of the divergence between the closed-loop state and the reference state it is commanding. This could lead to some degradation in metrics. Did you mean a decrease of around 0.1?
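A minimal sketch of the imperfect-tracking point, assuming an Euler-integrated kinematic bicycle model and a crude proportional speed tracker standing in for the devkit's LQR (the gain, wheelbase, and horizon are made up for illustration): even over a short horizon the tracked state lags the commanded reference, and that lag is exactly the divergence the log future planner never sees.

```python
import math


def bicycle_step(x, y, heading, v, accel, steer, dt=0.1, wheelbase=3.0):
    """One Euler step of a kinematic bicycle model (rear-axle reference point)."""
    x += v * math.cos(heading) * dt
    y += v * math.sin(heading) * dt
    heading += v * math.tan(steer) / wheelbase * dt
    v += accel * dt
    return x, y, heading, v


def track_constant_speed(v_ref, steps=20, gain=1.0, dt=0.1):
    """Track a constant-speed reference with a proportional controller.

    A "perfect" controller would jump to v_ref immediately; this tracker
    only closes part of the error each step, so the realized speed lags.
    """
    x = y = heading = v = 0.0
    for _ in range(steps):
        accel = gain * (v_ref - v)  # crude P-controller on speed
        x, y, heading, v = bicycle_step(x, y, heading, v, accel, 0.0, dt)
    return v
```

After 2 simulated seconds of tracking a 10 m/s reference from rest, the realized speed is still below the reference, so the ego's pose diverges from the logged one even though the commanded trajectory is the expert's.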