SxJyJay / MSMDFusion

[CVPR 2023] MSMDFusion: Fusing LiDAR and Camera at Multiple Scales with Multi-Depth Seeds for 3D Object Detection
Apache License 2.0
167 stars · 10 forks

AssertionError: Samples in split doesn't match samples in predictions #11

Open jiumozhi123 opened 1 year ago

jiumozhi123 commented 1 year ago

Hi, I tried to run inference with fusion_voxel0075_R50.pth (from the Baidu cloud storage link) and transfusion_nusc_voxel_L.py (the baseline config). When I run "python tools/test.py configs/transfusion_nusc_voxel_L.py checkpoints/fusion_voxel0075_R50.pth --eval bbox", the following error occurs (screenshot "Screenshot 2023-04-12 14-53-32" showing the AssertionError above). What do I need to do to get this inference working?

SxJyJay commented 1 year ago

It seems that your validation set is not complete. Please double-check whether you downloaded the complete nuScenes validation set. Besides, the checkpoint "fusion_voxel0075_R50.pth" merges the pretrained TransFusion-L and ResNet-50 weights, so you should load a pure pre-trained TransFusion-L checkpoint for LiDAR-only evaluation. We provide "fusion_voxel0075_R50.pth" so that users can directly train the second MSMDFusion stage without being bothered by the first-stage LiDAR-only backbone pretraining.

SxJyJay commented 1 year ago

Maybe you can refer to this page

jiumozhi123 commented 1 year ago

> It seems that your validation set is not complete. Please double-check whether you downloaded the complete nuScenes validation set. Besides, the checkpoint "fusion_voxel0075_R50.pth" merges the pretrained TransFusion-L and ResNet-50 weights, so you should load a pure pre-trained TransFusion-L checkpoint for LiDAR-only evaluation. We provide "fusion_voxel0075_R50.pth" so that users can directly train the second MSMDFusion stage without being bothered by the first-stage LiDAR-only backbone pretraining.

I'm sure that my nuScenes dataset is complete. Could you provide the pre-trained TransFusion-L checkpoint file? I would like to measure the performance of the baseline network in MSMDFusion. By the way, the nuScenes dataset I run inference on includes the "foreground_mixed_6nn_width_depth" folder for samples and sweeps. Does it have any influence on LiDAR-only inference? Thanks a lot!

SxJyJay commented 1 year ago

I cannot find the pre-trained TransFusion-L checkpoint file. You can extract the LiDAR part from fusion_voxel0075_R50.pth (see the sketch below). "FOREGROUND_MIXED_6NN_WITH_DEPTH" doesn't influence LiDAR-only inference.
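A minimal sketch of how that extraction could look (untested; the camera-branch key prefixes such as "img_backbone"/"img_neck" are assumptions about how the fusion model names its image branch, so adjust them to the actual keys in the checkpoint):

```python
import torch

# Load the merged fusion checkpoint (LiDAR branch + ResNet-50 image branch).
ckpt = torch.load('checkpoints/fusion_voxel0075_R50.pth', map_location='cpu')
state = ckpt.get('state_dict', ckpt)

# Drop parameters belonging to the camera branch; these prefixes are
# assumptions and may need to be adapted to the real key names.
camera_prefixes = ('img_backbone', 'img_neck')
lidar_state = {k: v for k, v in state.items() if not k.startswith(camera_prefixes)}

# Save a LiDAR-only checkpoint that can be loaded for TransFusion-L evaluation.
torch.save({'state_dict': lidar_state, 'meta': ckpt.get('meta', {})},
           'checkpoints/transfusion_L_lidar_only.pth')
```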

jiumozhi123 commented 1 year ago

I found out the reason for the error in the inference task: the line "cfg.data.test.ann_file = 'data/nuscenes/nuscenes_infos_train.pkl'" at line 117 of https://github.com/SxJyJay/MSMDFusion/blob/main/tools/test.py should be removed.
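For reference, my reading of the failure is sketched below (the surrounding code in tools/test.py may differ): the override makes the test dataloader read the train infos, so the produced predictions no longer match the validation split that the nuScenes evaluator expects.

```python
# tools/test.py, around line 117 (sketch): this override replaces the
# annotation file from the chosen config with the *train* infos, so the
# predictions no longer correspond to the validation split and the evaluator
# raises "AssertionError: Samples in split doesn't match samples in predictions".
# Deleting (or commenting out) the line restores the config's own ann_file:
# cfg.data.test.ann_file = 'data/nuscenes/nuscenes_infos_train.pkl'
```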

SxJyJay commented 1 year ago

> I found out the reason for the error in the inference task: the line "cfg.data.test.ann_file = 'data/nuscenes/nuscenes_infos_train.pkl'" at line 117 of https://github.com/SxJyJay/MSMDFusion/blob/main/tools/test.py should be removed.

Thanks for pointing this out! I will fix this bug.