OpenDriveLab / ThinkTwice

[CVPR 2023] Pytorch implementation of ThinkTwice, a SOTA Decoder for End-to-end Autonomous Driving under BEV.
Apache License 2.0

Research details #4

Closed rockstarsir closed 1 year ago

rockstarsir commented 1 year ago

Hi Team,

I am starting my research in the autonomous driving field. Could you help me with the following questions about your architecture?

  1. I would like to know what inputs the architecture expects and what exactly the outputs generated by the architecture are.
  2. I saw a video of the trained model on YouTube, and it looks amazing. I am curious whether you are predicting both future waypoints and control commands. If you are predicting only future waypoints, how do you manage to stop the vehicle when there is an obstacle?

Thanks in advance

jiaxiaosong1002 commented 1 year ago

Thanks for your interest.

  1. The inputs are images and point clouds. The outputs are planned trajectories and control signals. You could check the run_step function of our team agent for details.

  2. We predict both, to combine the advantages of the two output formats. Please check our previous work TCP for details. Braking is handled implicitly by end-to-end learning. Of course, one could add an additional emergency-braking module for better safety.
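For readers new to the field, the dual-output idea above can be sketched roughly as follows. This is a hypothetical illustration, not code from this repository: `control_from_waypoints`, `fuse_controls`, the aim-point steering rule, the gains, and the 0.5/0.5 fusion weight are all assumptions chosen for clarity; the actual logic lives in the `run_step` function of the team agent and in the TCP paper.

```python
import math

def control_from_waypoints(waypoints, speed, target_speed=4.0):
    """Turn predicted waypoints into (steer, throttle, brake).

    Hypothetical sketch: steer toward the average of the first two
    waypoints (ego frame: x forward, y left) and regulate speed with
    a simple proportional controller. Braking emerges when the
    target speed implied by the plan is below the current speed.
    """
    ax = (waypoints[0][0] + waypoints[1][0]) / 2.0
    ay = (waypoints[0][1] + waypoints[1][1]) / 2.0
    steer = max(-1.0, min(1.0, math.atan2(ay, ax) / math.pi))
    speed_err = target_speed - speed
    throttle = max(0.0, min(0.75, 0.5 * speed_err))
    brake = 1.0 if speed_err < -1.0 else 0.0
    return (steer, throttle, brake)

def fuse_controls(wp_ctrl, net_ctrl, alpha=0.5):
    """Blend waypoint-derived control with the directly predicted
    control signal (TCP-style fusion; equal weights are an
    assumption, not the paper's scheme)."""
    return tuple(alpha * a + (1.0 - alpha) * b
                 for a, b in zip(wp_ctrl, net_ctrl))

# Example: a straight-ahead plan at low speed yields throttle, no brake.
wp_ctrl = control_from_waypoints([(2.0, 0.0), (4.0, 0.0)], speed=1.0)
fused = fuse_controls(wp_ctrl, net_ctrl=(0.0, 0.3, 0.0))
```

The point of fusing is that waypoint-following is smooth and interpretable, while direct control prediction can react faster in situations the trajectory head handles poorly; averaging (or more sophisticated situation-dependent weighting, as in TCP) hedges between the two.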