tusen-ai / SST

Code for a series of work in LiDAR perception, including SST (CVPR 22), FSD (NeurIPS 22), FSD++ (TPAMI 23), FSDv2, and CTRL (ICCV 23, oral).

Question regarding reproducing SST results on waymo #94

Closed: richardkxu closed this issue 1 year ago

richardkxu commented 1 year ago

Hi, thank you for the great repo! I am trying to reproduce the SST results on the waymoD5 dataset (load_interval=5), but I got much lower metrics. Here are the results I got:

`sst/sst_waymoD5_1x_3class_12heads.py` results:

| Class      | L1 mAP | L1 mAPH | L2 mAP | L2 mAPH |
|------------|--------|---------|--------|---------|
| Vehicle    | 0.4463 | 0.4406  | 0.3903 | 0.3853  |
| Pedestrian | 0.5345 | 0.4239  | 0.4612 | 0.3651  |
| Sign       | 0.0000 | 0.0000  | 0.0000 | 0.0000  |
| Cyclist    | 0.3997 | 0.3892  | 0.3848 | 0.3747  |
| Overall    | 0.4602 | 0.4179  | 0.4121 | 0.3750  |

which is much lower than the Overall/L1 mAP of 0.6797 mentioned in this issue.

`sst_refactor/sst_waymoD5_1x_3class_8heads_v2.py` results:

| Class      | L1 mAP | L1 mAPH | L2 mAP | L2 mAPH |
|------------|--------|---------|--------|---------|
| Vehicle    | 0.4355 | 0.4299  | 0.3806 | 0.3756  |
| Pedestrian | 0.5287 | 0.4172  | 0.4561 | 0.3593  |
| Sign       | 0.0000 | 0.0000  | 0.0000 | 0.0000  |
| Cyclist    | 0.3911 | 0.3804  | 0.3766 | 0.3663  |
| Overall    | 0.4518 | 0.4092  | 0.4044 | 0.3671  |

which is much lower than the Overall/L1 mAP of 0.6734 mentioned in this issue.

I am running with 8 GPUs for 12 epochs and the exact same config files as provided in the repo. I am using mmdet==0.15.0 installed directly from your repo, and the old version of the Waymo data (old coordinate system) generated with your create_data.py. I also ran evaluation on some pretrained checkpoints and was able to get the same evaluation results on the validation set, which I think partly verifies that my data and environment are correct.
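
For reference, here is roughly how I generated the data and launched training/evaluation. I am following the standard mmdetection3d-style scripts shipped with the repo; the exact config paths, worker count, and the checkpoint name under work_dirs below are illustrative rather than copied verbatim:

```bash
# Waymo data generation (old coordinate system) via the repo's create_data.py;
# paths and the --workers value are placeholders.
python tools/create_data.py waymo \
    --root-path ./data/waymo \
    --out-dir ./data/waymo \
    --workers 64 \
    --extra-tag waymo

# Training on 8 GPUs with the unmodified config (12 epochs as defined in the config).
bash tools/dist_train.sh configs/sst/sst_waymoD5_1x_3class_12heads.py 8

# Evaluating the resulting checkpoint on the validation split;
# the work_dirs path stands in for the actual checkpoint file.
bash tools/dist_test.sh configs/sst/sst_waymoD5_1x_3class_12heads.py \
    work_dirs/sst_waymoD5_1x_3class_12heads/latest.pth 8 --eval waymo
```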

Am I missing anything? Thank you so much!

Abyssaledge commented 1 year ago

Sorry for the late reply. Regarding "I also ran evaluation on some pretrained checkpoints and was able to get the same evaluation results on the validation set": do you mean that you could obtain normal results using our checkpoint?

richardkxu commented 1 year ago

Yes, I was able to get the FSD results using your checkpoint.

Abyssaledge commented 1 year ago

How about training FSD from scratch? Does it work?

Abyssaledge commented 1 year ago

Please reopen this issue if you need further discussion.