facebookresearch / votenet

Deep Hough Voting for 3D Object Detection in Point Clouds

Getting lower result using pretrained model #28

Closed dashidhy closed 5 years ago

dashidhy commented 5 years ago

Dear authors,

I tried to evaluate the ScanNet-pretrained model you provided, as illustrated in the README, and got 57.3156 mAP@0.25 and 34.0198 mAP@0.5. According to issue #11, this result is clearly lower than expected, and even lower than my self-reproduced model on ScanNet, which reaches 57.6734 mAP@0.25 and 34.3470 mAP@0.5.

It is quite weird that the same pretrained model on the same dataset gives different results, and I can't tell what has happened here. My guess is that this may be related to data corruption or something similar, because I experienced a network cut-off while downloading ScanNet, but I'm not sure. Could you please share some thoughts on this?
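For completeness, the evaluation command I used follows the README; reconstructed from the Namespace in the log below, it looks roughly like this (please double-check the flags against the README):

```
python eval.py --dataset scannet \
    --checkpoint_path demo_files/pretrained_votenet_on_scannet.tar \
    --dump_dir demo_files/eval_pretrained_scannet \
    --num_point 40000 --cluster_sampling seed_fps \
    --use_3d_nms --use_cls_nms --per_class_proposal
```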

---- log_eval of pretrained model ----

Namespace(DUMP_DIR='demo_files/eval_pretrained_scannet', ap_iou_thresholds='0.25,0.5', batch_size=8, checkpoint_path='demo_files/pretrained_votenet_on_scannet.tar', cluster_sampling='seed_fps', conf_thresh=0.05, dataset='scannet', dump_dir='demo_files/eval_pretrained_scannet', faster_eval=False, model='votenet', nms_iou=0.25, no_height=False, num_point=40000, num_target=256, per_class_proposal=True, shuffle_dataset=False, use_3d_nms=True, use_cls_nms=True, use_color=False, use_old_type_nms=False, use_sunrgbd_v2=False, vote_factor=1)
Loaded checkpoint demo_files/pretrained_votenet_on_scannet.tar (epoch: 120)
2019-09-25 07:57:06.985165
eval mean box_loss: 0.132122
eval mean center_loss: 0.041772
eval mean heading_cls_loss: 0.000000
eval mean heading_reg_loss: 0.000000
eval mean loss: 6.451989
eval mean neg_ratio: 0.421337
eval mean obj_acc: 0.847017
eval mean objectness_loss: 0.116719
eval mean pos_ratio: 0.335211
eval mean sem_cls_loss: 0.498210
eval mean size_cls_loss: 0.498671
eval mean size_reg_loss: 0.040483
eval mean vote_loss: 0.404896
eval cabinet Average Precision: 0.360204
eval bed Average Precision: 0.880230
eval chair Average Precision: 0.872860
eval sofa Average Precision: 0.908269
eval table Average Precision: 0.586539
eval door Average Precision: 0.464863
eval window Average Precision: 0.356241
eval bookshelf Average Precision: 0.414334
eval picture Average Precision: 0.067318
eval counter Average Precision: 0.506180
eval desk Average Precision: 0.654182
eval curtain Average Precision: 0.439923
eval refrigerator Average Precision: 0.472297
eval showercurtrain Average Precision: 0.526015
eval toilet Average Precision: 0.956763
eval sink Average Precision: 0.549155
eval bathtub Average Precision: 0.910629
eval garbagebin Average Precision: 0.390812
eval mAP: 0.573156
eval cabinet Recall: 0.750000
eval bed Recall: 0.950617
eval chair Recall: 0.912281
eval sofa Recall: 0.989691
eval table Recall: 0.822857
eval door Recall: 0.704497
eval window Recall: 0.609929
eval bookshelf Recall: 0.844156
eval picture Recall: 0.238739
eval counter Recall: 0.846154
eval desk Recall: 0.929134
eval curtain Recall: 0.731343
eval refrigerator Recall: 0.964912
eval showercurtrain Recall: 0.821429
eval toilet Recall: 1.000000
eval sink Recall: 0.724490
eval bathtub Recall: 0.967742
eval garbagebin Recall: 0.694340
eval AR: 0.805684
eval cabinet Average Precision: 0.064891
eval bed Average Precision: 0.799895
eval chair Average Precision: 0.674196
eval sofa Average Precision: 0.681848
eval table Average Precision: 0.446467
eval door Average Precision: 0.152873
eval window Average Precision: 0.088181
eval bookshelf Average Precision: 0.286832
eval picture Average Precision: 0.015800
eval counter Average Precision: 0.140754
eval desk Average Precision: 0.310386
eval curtain Average Precision: 0.138233
eval refrigerator Average Precision: 0.217701
eval showercurtrain Average Precision: 0.105123
eval toilet Average Precision: 0.751489
eval sink Average Precision: 0.256540
eval bathtub Average Precision: 0.848165
eval garbagebin Average Precision: 0.144188
eval mAP: 0.340198
eval cabinet Recall: 0.341398
eval bed Recall: 0.876543
eval chair Recall: 0.751462
eval sofa Recall: 0.804124
eval table Recall: 0.628571
eval door Recall: 0.376874
eval window Recall: 0.212766
eval bookshelf Recall: 0.662338
eval picture Recall: 0.049550
eval counter Recall: 0.307692
eval desk Recall: 0.637795
eval curtain Recall: 0.223881
eval refrigerator Recall: 0.596491
eval showercurtrain Recall: 0.285714
eval toilet Recall: 0.827586
eval sink Recall: 0.387755
eval bathtub Recall: 0.870968
eval garbagebin Recall: 0.375472
eval AR: 0.512054

NUAAXQ commented 5 years ago

In that issue, the author says the result may vary within a small range. Maybe you should run the evaluation a few more times and see whether you can get a better result.
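A quick way to check this, assuming the same eval.py invocation reconstructed above (flags taken from the logged Namespace, so adjust as needed), is to repeat the evaluation a few times and compare the final mAP lines. As far as I can tell, the run-to-run variance comes mainly from the random point subsampling in the dataset loader, so small fluctuations are expected:

```
# Sketch: repeat the evaluation and compare mAP across runs.
# If the extracted ScanNet data is intact, mAP@0.25 / mAP@0.5 should only
# fluctuate by a few tenths of a point between runs.
for i in 1 2 3; do
    python eval.py --dataset scannet \
        --checkpoint_path demo_files/pretrained_votenet_on_scannet.tar \
        --dump_dir demo_files/eval_pretrained_scannet_run$i \
        --num_point 40000 --cluster_sampling seed_fps \
        --use_3d_nms --use_cls_nms --per_class_proposal
    # log_eval.txt is the log file eval.py writes into the dump dir (quoted above)
    grep "eval mAP" demo_files/eval_pretrained_scannet_run$i/log_eval.txt
done
```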

dashidhy commented 5 years ago

@NUAAXQ Makes sense. I ran the evaluation again and got the expected results. Thanks!