jayLEE0301 / vq_bet_official

Official code for "Behavior Generation with Latent Actions" (ICML 2024 Spotlight)
https://sjlee.cc/vq-bet/
MIT License

input in NuScenes planning task #8


dichencd commented 1 month ago

Thanks for the excellent work! I have some questions about the NuScenes planning task. You mentioned using future trajectories from GPT-Driver in vq_bet issue #6, so I wonder:

  1. Does the diffusion-based trajectory model also consume the future trajectories?
  2. Have you tried using only the historical trajectories of other agents in the driving task, since future trajectories are not observable in reality?

Thank you very much!

jayLEE0301 commented 1 month ago

Hello, sorry for the late reply.

Yes, both VQ-BeT and Diffusion Policy use future trajectories during the training phase. (Please note that we don't use future trajectories at test time.)
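To make the train/test asymmetry concrete, here is a minimal toy sketch (not code from this repo; all names, dimensions, and the linear "policy" are assumptions): ground-truth future trajectories appear only as regression targets during training, while at test time the policy receives nothing but the current observation.

```python
import numpy as np

rng = np.random.default_rng(0)
obs_dim, horizon = 4, 6  # assumed sizes, purely illustrative

# Training data: each sample pairs a current observation with a
# ground-truth future trajectory used ONLY as the supervision label.
obs = rng.normal(size=(128, obs_dim))
W_true = rng.normal(size=(obs_dim, horizon * 2))
future_traj = obs @ W_true  # ground-truth future (x, y) waypoints, flattened

# "Training": fit a linear map obs -> future trajectory via least squares.
W, *_ = np.linalg.lstsq(obs, future_traj, rcond=None)

# "Testing": only the current observation is fed in; no future
# trajectory of any sort is available to the policy.
test_obs = rng.normal(size=(1, obs_dim))
pred_traj = (test_obs @ W).reshape(horizon, 2)  # predicted (x, y) waypoints
print(pred_traj.shape)  # (6, 2)
```

The real models are of course learned networks rather than a linear map, but the data flow (labels at train time, observation-only at test time) is the same.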

I haven't tried using only the historical trajectories of other agents, but I think it would be very interesting work (though it would require the additional assumption that we have access to other vehicles' historical trajectories).

dichencd commented 3 weeks ago

Thank you very much for your reply! I have some further questions:

  1. When you say future trajectories are used in training, do you mean ground-truth future trajectories or trajectories predicted by the trained GPT-Driver?
  2. You mentioned that future trajectories are not used in testing. Do you mean that no future trajectories of any sort (e.g., predicted future trajectories) are used in testing, or that only ground-truth future trajectories are excluded while predicted trajectories are still used?
  3. Using other agents' history (position, velocity) is common in the prediction literature, e.g., https://arxiv.org/pdf/2207.05844. I am curious whether future trajectories are used because predicting the ego trajectory conditioned on other agents' futures makes more sense than conditioning on their history?

jayLEE0301 commented 2 weeks ago

  1. When we train VQ-BeT, we use ground-truth future trajectories as labels, following the setup of GPT-Driver.
  2. Both GPT-Driver and VQ-BeT only predict the future trajectory, conditioned on the current observation. (Neither VQ-BeT nor GPT-Driver performs closed-loop control, so no future trajectories of any sort are used in testing.)
  3. We don't use any history of other agents; we only use the history of the ego agent.
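The conditioning described in point 3 can be sketched as follows. This is a hypothetical illustration (the array shapes and the helper `build_policy_input` are assumptions, not this repo's API): the dataset may contain other agents' trajectories, but the observation the policy consumes is built from the ego agent's history alone.

```python
import numpy as np

# Past (x, y) positions of the ego agent over 3 timesteps.
ego_history = np.array([[0.0, 0.0], [0.5, 0.1], [1.0, 0.2]])

# Other agents' trajectories may exist in the dataset
# (3 agents, 6 future steps, (x, y)), but are never fed to the policy.
other_agents_future = np.zeros((3, 6, 2))

def build_policy_input(ego_hist):
    """Flatten the ego history into the observation vector the policy sees."""
    return ego_hist.reshape(-1)

obs = build_policy_input(ego_history)
print(obs.shape)  # (6,)
```

Conditioning on other agents' histories instead, as the question suggests, would amount to concatenating those histories into `obs` here.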

Thank you:)