In the paper, it says the input image is down-sampled to 522xC and the LiDAR to 88C. If I understand correctly, for inference the batch size B in your comments should be one? And why is the input size of the image B*4*seq_len, C, H, W in your comments? Where does the number 4 come from? Maybe I misunderstood something.
Hi,
Thanks again for your contribution!
After reading the paper, I took a look at the code, especially the GPT class, but I found something I'm a little confused about.
```python
def forward(self, image_tensor, lidar_tensor, velocity):
    """
    Args:
        image_tensor (tensor): B*4*seq_len, C, H, W
        lidar_tensor (tensor): B*seq_len, C, H, W
        velocity (tensor): ego-velocity
    """
```
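To make my question concrete, here is a minimal sketch of how I would expect such a flattened batch dimension to arise. The factor 4 (e.g. multiple camera views per time step) and all the sizes below are my assumptions, not taken from your code:

```python
import torch

# Hypothetical sizes: batch, views per step, sequence length, channels, height, width
B, n_views, seq_len, C, H, W = 2, 4, 3, 3, 64, 64

# Per-view, per-timestep images kept in separate leading dimensions
images = torch.randn(B, n_views, seq_len, C, H, W)

# Conv encoders expect (N, C, H, W), so the leading dims are often
# flattened into one batch dimension of size B*4*seq_len before encoding
flat = images.view(B * n_views * seq_len, C, H, W)
print(flat.shape)  # torch.Size([24, 3, 64, 64])
```

Is this roughly what the `B*4*seq_len` in the docstring means, or does the 4 refer to something else entirely?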
Best wishes! Thanks again!