SamsungLabs / fcaf3d

[ECCV2022] FCAF3D: Fully Convolutional Anchor-Free 3D Object Detection
MIT License

Performance on ScanNet #39

Closed · WWW2323 closed 2 years ago

WWW2323 commented 2 years ago

Hi, awesome work! However, I am having trouble reproducing the results on the ScanNet dataset. Could you give me some advice? I train FCAF3D with the following command:

bash tools/dist_train.sh configs/fcaf3d/fcaf3d_scannet-3d-18class.py 2

and get the following results:

+----------------+---------+---------+---------+---------+
| classes        | AP_0.25 | AR_0.25 | AP_0.50 | AR_0.50 |
+----------------+---------+---------+---------+---------+
| cabinet        | 0.5393  | 0.9059  | 0.3704  | 0.7366  |
| bed            | 0.8674  | 0.9259  | 0.7992  | 0.8519  |
| chair          | 0.9542  | 0.9854  | 0.8964  | 0.9415  |
| sofa           | 0.9118  | 0.9794  | 0.7914  | 0.9381  |
| table          | 0.7015  | 0.8971  | 0.6479  | 0.8029  |
| door           | 0.6186  | 0.9229  | 0.4389  | 0.7195  |
| window         | 0.5750  | 0.8865  | 0.3336  | 0.5993  |
| bookshelf      | 0.5852  | 0.8831  | 0.5157  | 0.7922  |
| picture        | 0.2571  | 0.5721  | 0.1564  | 0.3649  |
| counter        | 0.6106  | 0.8846  | 0.2338  | 0.5385  |
| desk           | 0.7157  | 0.9685  | 0.5472  | 0.8819  |
| curtain        | 0.5929  | 0.8955  | 0.4346  | 0.7015  |
| refrigerator   | 0.4901  | 0.8772  | 0.4010  | 0.8246  |
| showercurtrain | 0.8306  | 0.9643  | 0.4340  | 0.7857  |
| toilet         | 1.0000  | 1.0000  | 0.9378  | 0.9655  |
| sink           | 0.8543  | 0.9592  | 0.5197  | 0.6837  |
| bathtub        | 0.8684  | 0.9032  | 0.8299  | 0.8710  |
| garbagebin     | 0.6439  | 0.8717  | 0.5762  | 0.7604  |
+----------------+---------+---------+---------+---------+
| Overall        | 0.7009  | 0.9046  | 0.5480  | 0.7644  |
+----------------+---------+---------+---------+---------+

There is a gap of about 0.6 mAP between my result (70.09) and the paper's result (70.7) on the AP_0.25 metric, and a gap of about 1.2 mAP between my result (54.8) and the paper's result (56.0) on the AP_0.50 metric. Is this just variance, or do I need to modify something to achieve results comparable to the paper? Thanks ~

filaPro commented 2 years ago

Hi @WWW2323, does this duplicate #37?

WWW2323 commented 2 years ago

Hi, my issue seems different from #37. That issue evaluates on ScanNet directly with the checkpoint you provide, while I evaluate with a checkpoint I trained myself~

filaPro commented 2 years ago

Anyway, I believe it's just variance. Have you tried training 5 times and testing 5 times with different random seeds, as in our tools/test5x5.py?
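
For reference, a minimal sketch of such a multi-seed training run (assuming the standard mmdetection3d --seed and --work-dir options, which tools/dist_train.sh forwards to tools/train.py; the seed values and work-dir names below are only placeholders, and tools/test5x5.py in this repo handles the repeated evaluation):

# Sketch: train the ScanNet config 5 times with different random seeds on 2 GPUs,
# writing each run to its own work directory.
for seed in 0 1 2 3 4; do
  bash tools/dist_train.sh configs/fcaf3d/fcaf3d_scannet-3d-18class.py 2 \
    --seed ${seed} --work-dir work_dirs/fcaf3d_scannet_seed${seed}
done

The mAP reported in the paper is then the average over these runs, which smooths out the run-to-run variance seen in a single training.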

WWW2323 commented 2 years ago

Thanks for your reply~ I will try training 5 times and testing 5 times with different random seeds.