Hello, I encountered some strange problems when training with voxelnet_late_fusion.yaml and second_late_fusion.yaml. When I train with a batch_size of 1, everything works fine for both training and inference. However, when I use a batch_size larger than 1 (for example, 2) for training, I get the following error when calculating the loss on the validation set:
As you can see, I printed output_dict['psm'].shape, output_dict['rm'].shape, batch_data['ego']['label_dict']['targets'].shape, and batch_data['ego']['label_dict']['pos_equal_one'].shape.
The model's outputs are built from a batch size of 2, but the ground truth in batch_data only covers 1 sample. It seems there is a mismatch in the data-loading part of LatefusionDataset. Can you help me solve this problem? How can I fix it?
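To make the symptom concrete, here is a minimal sketch of the check implied by the printed shapes. The shape values are illustrative placeholders (the real tensors come from output_dict and batch_data['ego']['label_dict']); only the batch-dimension comparison matters:

```python
# Illustrative shapes only (batch dimension first); the real values
# come from output_dict and batch_data['ego']['label_dict'].
psm_shape = (2, 2, 100, 252)        # model output built from batch_size 2
targets_shape = (1, 100, 252, 14)   # ground-truth labels cover only 1 sample

def batch_mismatch(out_shape, label_shape):
    """Return True when the model output and the labels disagree on the
    batch dimension -- the condition that breaks the validation loss."""
    return out_shape[0] != label_shape[0]

print(batch_mismatch(psm_shape, targets_shape))  # → True
```

If this returns True during validation while training works, the validation side of the dataset/collate function is likely assembling labels for only one sample per batch.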