SamsungLabs / fcaf3d

[ECCV2022] FCAF3D: Fully Convolutional Anchor-Free 3D Object Detection
MIT License

mAP result question #37

Closed chence17 closed 2 years ago

chence17 commented 2 years ago

Hi,

I have run the code on my local server, but got slightly different mAP results compared to the values reported in the paper. I suspect this may be due to the dataset or dataloader. Here is the experiment.

I evaluated your released model on the ScanNet V2 validation set and got this result:


On ScanNet, the paper reports 0.715 mAP@0.25 and 0.573 mAP@0.5, but my evaluation gives 0.7063 and 0.5661, respectively. I have run the evaluation many times: mAP@0.25 is always around 0.707 and mAP@0.5 is always around 0.566. Maybe this is due to the dataset or dataloader; I want to figure out why there is a gap.

My experiment environment is:

filaPro commented 2 years ago

Hi @chence17 ,

For the paper, we ran each experiment as 5 training runs and 5 test runs, and then reported the mean and the maximum values of mAP@0.25 and mAP@0.5. Here we provide just one of these models, which scores about 70.58 mAP@0.25 and 57.28 mAP@0.5 according to the provided log file. I don't think a specific bug is the reason for your gap. If you run the training 5 times, you can get a model with 71.5 mAP@0.25.
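The mean/max reporting protocol described above can be sketched in a few lines of Python. The per-run numbers below are hypothetical placeholders for illustration, not actual FCAF3D results:

```python
from statistics import mean

# Hypothetical mAP@0.25 values from 5 independent training runs
# (placeholders for illustration, not actual FCAF3D numbers).
runs_map25 = [0.706, 0.711, 0.709, 0.715, 0.703]

# The paper's protocol: report both the average over all runs
# and the best single run.
print(f"mean mAP@0.25: {mean(runs_map25):.4f}")
print(f"best mAP@0.25: {max(runs_map25):.4f}")
```

Since only one checkpoint of the 5 is released, reproducing its evaluation is expected to land near the mean rather than at the best run, which accounts for a gap of roughly this size.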

chence17 commented 2 years ago

OK, I will try running the training 5 times. Thanks.