autonomousvision / transfuser

[PAMI'23] TransFuser: Imitation with Transformer-Based Sensor Fusion for Autonomous Driving; [CVPR'21] Multi-Modal Fusion Transformer for End-to-End Autonomous Driving
MIT License

Need some help understanding the code #203

Closed Oliverwang11 closed 3 months ago

Oliverwang11 commented 3 months ago

Hi,

Thanks again for your contribution!

After reading the paper, I took a look at the code, especially for the GPT class, but I found something I am a little bit confused about.

  1. The paper says the input image features are down-sampled to 5x22xC and the LiDAR features to 8x8xC. If I understand correctly, the batch size B in your comments should be one during inference? Also, the comments give the image input size as Bx4xseq_len, C, H, W — where does the number 4 come from? Maybe I misunderstood something.

```python
def forward(self, image_tensor, lidar_tensor, velocity):
    """
    Args:
        image_tensor (tensor): B*4*seq_len, C, H, W
        lidar_tensor (tensor): B*seq_len, C, H, W
        velocity (tensor): ego-velocity
    """
```
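For reference, here is a minimal sketch (using NumPy rather than the repository's actual PyTorch code, with a hypothetical channel dimension C=512) of how the down-sampled feature maps described above could be flattened into a joint token sequence for the transformer, assuming B=1 and seq_len=1:

```python
import numpy as np

# Hypothetical dimensions: batch size 1 at inference, single time step,
# C=512 chosen for illustration only
B, seq_len, C = 1, 1, 512

# Down-sampled feature maps as described in the paper
image_feat = np.zeros((B * seq_len, C, 5, 22))  # image features: 5x22xC
lidar_feat = np.zeros((B * seq_len, C, 8, 8))   # LiDAR features: 8x8xC

# Flatten the spatial dimensions into tokens: 5*22 = 110 image tokens,
# 8*8 = 64 LiDAR tokens
image_tokens = image_feat.reshape(B * seq_len, C, -1).transpose(0, 2, 1)
lidar_tokens = lidar_feat.reshape(B * seq_len, C, -1).transpose(0, 2, 1)

# Concatenate along the token axis for joint self-attention: 110 + 64 = 174
tokens = np.concatenate([image_tokens, lidar_tokens], axis=1)
print(tokens.shape)  # (1, 174, 512)
```

This is only meant to make the shape bookkeeping concrete; the actual model operates on learned feature maps and adds positional/velocity embeddings before the transformer layers.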

Best wishes! Thanks again!

Kait0 commented 3 months ago

I think it's just a typo.

Oliverwang11 commented 3 months ago

Thanks for your reply. So the batch size B during inference is one, right?

Kait0 commented 3 months ago

yes

Oliverwang11 commented 3 months ago

Thanks a lot!