MCZhi / DIPP

[TNNLS] Differentiable Integrated Prediction and Planning Framework for Urban Autonomous Driving
https://mczhi.github.io/DIPP/

Result in the paper #4

Closed Kguo-cs closed 1 year ago

Kguo-cs commented 1 year ago

Could you provide the pretrained model for reproducing your closed-loop results in the paper? Thank you.

MCZhi commented 1 year ago

Sorry, due to restrictions of the Waymo Dataset License Agreement, we are not allowed to share the trained model. If you have any questions when reproducing the training process, please feel free to open an issue and contact us.

Kguo-cs commented 1 year ago

Thank you for your answer. Could you tell me which 100 replay scenes you used for testing?

Kguo-cs commented 1 year ago

And the 10156 driving scenes used for training.

MCZhi commented 1 year ago

The replay scenes are from the uncompressed_scenario_training_20s_training_20s.tfrecord-00299-of-01000 and uncompressed_scenario_training_20s_training_20s.tfrecord-00306-of-01000 files. For training, we randomly select 90 files from the dataset (around 10k scenes and 113k data points), and for validation, we randomly select 20 files other than the training files.
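
For reference, a minimal sketch of how such a random file split could be reproduced (the data directory and random seed below are placeholders, not values taken from the paper or this repo):

```python
import random
from pathlib import Path

# Assumed local location of the Waymo Open Motion Dataset 20s training shards.
DATA_DIR = Path("/data/waymo/uncompressed_scenario_training_20s")

files = sorted(DATA_DIR.glob("training_20s.tfrecord-*-of-01000"))
random.seed(0)  # assumed seed; the paper does not specify one
random.shuffle(files)

train_files = files[:90]    # ~10k scenes, ~113k data points per the authors
val_files = files[90:110]   # 20 files disjoint from the training files
```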

Kguo-cs commented 1 year ago

Thank you again for your help. However, I find there are 75 scenes in uncompressed_scenario_training_20s_training_20s.tfrecord-00299-of-01000 and 58 scenes in uncompressed_scenario_training_20s_training_20s.tfrecord-00306-of-01000, which sums to 133 (more than 100). How do you handle this?
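
For reference, the per-shard counts can be checked with a short script like the following (the shard paths are assumed to point at local copies of the files):

```python
import tensorflow as tf

# Each record in a 20s shard is one serialized Scenario proto,
# so counting records counts scenes.
shards = [
    "uncompressed_scenario_training_20s_training_20s.tfrecord-00299-of-01000",
    "uncompressed_scenario_training_20s_training_20s.tfrecord-00306-of-01000",
]
for shard in shards:
    n = sum(1 for _ in tf.data.TFRecordDataset(shard))
    print(shard, n)  # reported counts: 75 and 58
```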

MCZhi commented 1 year ago

I manually removed some scenes that are not of interest, such as waiting at a red light, and kept 100 scenes for evaluation.
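
One possible way to apply such a manual filter when building the evaluation set is sketched below; the `eval_scenario_ids.txt` whitelist is hypothetical, since the actual 100 scenario IDs are not released:

```python
import tensorflow as tf
from waymo_open_dataset.protos import scenario_pb2

# Hypothetical whitelist of the 100 scenario IDs kept after manual screening
# (e.g. scenes that only wait at a red light were dropped).
kept_ids = set(open("eval_scenario_ids.txt").read().split())

def load_eval_scenarios(shards):
    """Yield only the scenarios whose IDs survived the manual screening."""
    for record in tf.data.TFRecordDataset(shards):
        scenario = scenario_pb2.Scenario()
        scenario.ParseFromString(record.numpy())
        if scenario.scenario_id in kept_ids:
            yield scenario

shards = [
    "uncompressed_scenario_training_20s_training_20s.tfrecord-00299-of-01000",
    "uncompressed_scenario_training_20s_training_20s.tfrecord-00306-of-01000",
]
eval_scenarios = list(load_eval_scenarios(shards))
assert len(eval_scenarios) == 100
```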