yezhen17 / 3DIoUMatch

[CVPR 2021] PyTorch implementation of 3DIoUMatch: Leveraging IoU Prediction for Semi-Supervised 3D Object Detection.

mAP on sunrgbd on pretrained model #18

Closed · machengcheng2016 closed this issue 2 years ago

machengcheng2016 commented 2 years ago

Greetings! I ran `sh run_pretrain.sh 1 pretrain_sunrgbd sunrgbd sunrgbd_v1_train_0.05.txt`. After training finished, I ran `sh run_eval.sh 0 pretrain_sunrgbd sunrgbd sunrgbd_v1_train_0.05.txt pretrain_sunrgbd/checkpoint.tar` and got the following output:

```
---------- iou_thresh: 0.250000 ----------
eval bed Average Precision: 0.719783
eval table Average Precision: 0.359931
eval sofa Average Precision: 0.414853
eval chair Average Precision: 0.582705
eval toilet Average Precision: 0.747250
eval desk Average Precision: 0.077238
eval dresser Average Precision: 0.031235
eval night_stand Average Precision: 0.235248
eval bookshelf Average Precision: 0.006786
eval bathtub Average Precision: 0.117187
eval mAP: 0.329222
eval bed Recall: 0.874770
eval table Recall: 0.749693
eval sofa Recall: 0.776213
eval chair Recall: 0.755451
eval toilet Recall: 0.927152
eval desk Recall: 0.659616
eval dresser Recall: 0.356481
eval night_stand Recall: 0.681102
eval bookshelf Recall: 0.207358
eval bathtub Recall: 0.615385
eval AR: 0.660322
---------- iou_thresh: 0.500000 ----------
eval bed Average Precision: 0.392357
eval table Average Precision: 0.102301
eval sofa Average Precision: 0.225640
eval chair Average Precision: 0.306473
eval toilet Average Precision: 0.401577
eval desk Average Precision: 0.005982
eval dresser Average Precision: 0.003862
eval night_stand Average Precision: 0.038365
eval bookshelf Average Precision: 0.000176
eval bathtub Average Precision: 0.030140
eval mAP: 0.150687
eval bed Recall: 0.530387
eval table Recall: 0.285072
eval sofa Recall: 0.414710
eval chair Recall: 0.453349
eval toilet Recall: 0.589404
eval desk Recall: 0.143138
eval dresser Recall: 0.069444
eval night_stand Recall: 0.185039
eval bookshelf Recall: 0.013378
eval bathtub Recall: 0.269231
eval AR: 0.295315
```

Does that mean mAP@0.25 and mAP@0.50 are 0.329222 and 0.150687, respectively? If so, why does Table 1 report them as 29.9 and 10.5? (screenshot of Table 1 attached)
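For reference, a minimal sketch (not part of the repo) for pulling the per-threshold mAP values out of an eval printout like the one above; the regexes assume that exact log format:

```python
import re

def parse_map(log_text: str) -> dict:
    """Return {iou_thresh: mAP} parsed from the eval printout."""
    results = {}
    thresh = None
    for line in log_text.splitlines():
        # Lines like "---------- iou_thresh: 0.250000 ----------" open a block.
        m = re.search(r"iou_thresh:\s*([0-9.]+)", line)
        if m:
            thresh = float(m.group(1))
            continue
        # Lines like "eval mAP: 0.329222" carry the block's mAP.
        m = re.search(r"eval mAP:\s*([0-9.]+)", line)
        if m and thresh is not None:
            results[thresh] = float(m.group(1))
    return results

# On the log above this returns {0.25: 0.329222, 0.5: 0.150687}.
```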

yezhen17 commented 2 years ago

Hi, thank you for your interest in our work!

Yes, you are correct: mAP@0.25 and mAP@0.50 here are 0.329222 and 0.150687, respectively.

However, the results reported in the paper are averaged over 3 labeled splits, and this particular split gives an above-average result.
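To make the averaging concrete: in the sketch below, only the first per-split value (this run's 0.3292) is real; the other two are invented placeholders, chosen only to show how one above-average split can coexist with the paper's lower averaged number.

```python
# Hypothetical illustration of the 3-split averaging behind Table 1.
# Only 0.3292 (the run in this issue) is real; the other two values
# are made-up placeholders, not actual results on the other splits.
split_map_025 = [0.3292, 0.2850, 0.2828]

mean_map = sum(split_map_025) / len(split_map_025)
print(f"mAP@0.25 averaged over 3 splits: {mean_map:.4f}")  # ~0.2990
```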

machengcheng2016 commented 2 years ago

Got it, thanks. Actually, I trained twice: the first trial gave (34.52, 16.80) and the second gave (32.92, 15.07), in terms of (mAP@0.25, mAP@0.50). It is quite surprising to see such large variance in plain fully-supervised training. I have another question, about pre-training: when pre-training ends, should I use `best_checkpoint_sum.tar` in the next step, i.e. `sh run_train.sh 0 train_sunrgbd sunrgbd sunrgbd_v1_train_0.05.txt best_checkpoint_sum.tar`?

yezhen17 commented 2 years ago

Hi, sorry for the late reply. I missed your comment.

The large variance is due to the small size of the labeled set (only 5% of the training data). And yes, use the best checkpoint, though it may not make a big difference.
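As a quick sanity check on the spread across the two trials reported above (numbers copied from the earlier comment; standard-library Python only):

```python
from statistics import mean, stdev

# mAP from the two fully-supervised trials reported in this thread (percent).
map_025 = [34.52, 32.92]
map_050 = [16.80, 15.07]

print(f"mAP@0.25: {mean(map_025):.2f} +/- {stdev(map_025):.2f}")  # roughly 33.7 +/- 1.1
print(f"mAP@0.50: {mean(map_050):.2f} +/- {stdev(map_050):.2f}")  # roughly 15.9 +/- 1.2
```

A spread on the order of one mAP point between runs is consistent with training on only 5% of the labels.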

machengcheng2016 commented 2 years ago

Thanks for your reply!