opendilab / InterFuser

[CoRL 2022] InterFuser: Safety-Enhanced Autonomous Driving Using Interpretable Sensor Fusion Transformer
Apache License 2.0

Training with LMDrive #95

Open st3lzer opened 2 weeks ago

st3lzer commented 2 weeks ago

Hello,

Thank you for providing an extensive code base and training dataset.

I have a few questions regarding the dataset usage and training process:

Dataset Usage: You mentioned that the only difference between your dataset and the one used for training InterFuser is the sampling rate. I plan to use the RGB images (cropped into three views) and the lidar_odd data as inputs. Is that correct? How would you suggest dealing with the difference in sampling rate? Should I use the entire dataset but train for fewer epochs?

Training Parameters: Which towns and weather conditions from the LMDrive dataset should be used for training, and which for validation, to replicate your published results as closely as possible?

Thank you!

deepcs233 commented 1 week ago

Hi!

Dataset Usage:

  1. Use both the lidar and lidar_odd sensors together, since each provides a distinct 180-degree field of view; combining the two yields a full 360-degree sweep (a merging sketch follows this list).
  2. If limited by time or computational resources, downsample the dataset by selecting frames at fixed intervals. For instance, keep every fourth frame (e.g., 001_rgb.jpg, 005_rgb.jpg, 009_rgb.jpg, ...) to manage the dataset size effectively (see the subsampling sketch below).
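
For item 1, here is a minimal merging sketch. The file names, directory layout, and `(N, 4)` point format are assumptions, not the confirmed LMDrive layout; adapt them to the actual dataset structure.

```python
import numpy as np

def load_merged_lidar(frame_dir: str, frame_id: str) -> np.ndarray:
    """Merge the two 180-degree sweeps into one 360-degree point cloud.

    Assumes each sweep is stored as an (N, 4) array of (x, y, z, intensity)
    in the ego frame; the paths below are hypothetical placeholders.
    """
    front = np.load(f"{frame_dir}/lidar/{frame_id}.npy")      # one 180-degree half
    rear = np.load(f"{frame_dir}/lidar_odd/{frame_id}.npy")   # the opposing half
    # Both halves share the ego coordinate frame, so simple concatenation
    # yields the full 360-degree sweep.
    return np.concatenate([front, rear], axis=0)
```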
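
For item 2, a small helper for fixed-interval subsampling. It assumes frames are named like `001_rgb.jpg` (as in the example above) and sort lexicographically in frame order.

```python
from pathlib import Path

def subsample_frames(rgb_dir: str, interval: int = 4) -> list[Path]:
    """Keep every `interval`-th frame of a sorted frame listing."""
    frames = sorted(Path(rgb_dir).glob("*_rgb.jpg"))
    # Slicing with a stride keeps frames 0, 4, 8, ... of the sorted list,
    # independent of how the file names are zero-padded.
    return frames[::interval]
```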

Training Parameters:

  1. Select multiple towns for training (possibly excluding Town05, since InterFuser is evaluated on the Town05 benchmark) and include all weather conditions. Be sure to review and adjust the settings as described in the InterFuser codebase (a split sketch follows).
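
One way to set up that train/validation split is sketched below. The directory layout (one sub-directory per route, with the town name embedded in the folder name) is an assumption; adjust the matching to the actual LMDrive folder naming.

```python
from pathlib import Path

def split_by_town(dataset_root: str, val_towns=("town05",)):
    """Split route folders into train/val lists by town name.

    Hypothetical layout: dataset_root contains one folder per route whose
    name includes the town identifier (e.g. routes_town05_long_...).
    """
    train, val = [], []
    for route in sorted(p for p in Path(dataset_root).iterdir() if p.is_dir()):
        name = route.name.lower()
        if any(town in name for town in val_towns):
            val.append(route)    # held-out town(s) go to validation
        else:
            train.append(route)  # everything else is used for training
    return train, val
```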