Closed chence17 closed 2 years ago
Hi @chence17 ,
For the paper we ran each experiment as 5 training runs and 5 test runs, and then report the mean and maximum values of mAP@0.25 and mAP@0.5. Here we provide just one of these models, which scores about 70.58 mAP@0.25 and 57.28 mAP@0.5 according to the provided log file. I don't think your gap is caused by a specific bug; if you run the training 5 times, you can get a model with 71.5 mAP@0.25.
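The mean/max reporting described above can be sketched as follows. The per-run numbers here are placeholders for illustration, not the paper's actual runs:

```python
# Hypothetical per-run results (placeholders, not the paper's actual values).
runs_map_25 = [0.7058, 0.7100, 0.7150, 0.7080, 0.7120]  # mAP@0.25 per training run
runs_map_50 = [0.5728, 0.5690, 0.5730, 0.5710, 0.5700]  # mAP@0.50 per training run

# Report both the mean over runs and the best single run, as done in the paper.
mean_25 = sum(runs_map_25) / len(runs_map_25)
best_25 = max(runs_map_25)
mean_50 = sum(runs_map_50) / len(runs_map_50)
best_50 = max(runs_map_50)

print(f"mAP@0.25: mean {mean_25:.4f}, best {best_25:.4f}")
print(f"mAP@0.50: mean {mean_50:.4f}, best {best_50:.4f}")
```

With this scheme, a single released checkpoint is expected to land near the mean, below the best-of-5 number quoted in the paper.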
OK, I will try running the training 5 times. Thanks.
Hi,
I have run the code on my local server, but I get slightly different mAP results from the values reported in the paper. I suspect this may be due to the dataset or dataloader. Here is the experiment.
I evaluated your released model on the ScanNet V2 val set and got this result:
On ScanNet, the paper reports mAP@0.25 = 0.715 but my eval gives 0.7063, and the paper reports mAP@0.5 = 0.573 but my eval gives 0.5661. I have run the evaluation many times; mAP@0.25 is always around 0.707 and mAP@0.5 is always around 0.566. Maybe this is due to the dataset or dataloader, and I want to figure out why there is a gap.
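As a quick sanity check, the gap between the released model and the paper's best-of-5 number is small enough to be plausible run-to-run variance (the values below are the ones quoted in this thread):

```python
# Values quoted in this thread: paper best-of-5 vs. my eval of the released model.
paper_best_25 = 0.715       # best-of-5 mAP@0.25 reported in the paper
released_model_25 = 0.7063  # mAP@0.25 of the released checkpoint on my machine

gap = paper_best_25 - released_model_25
print(f"mAP@0.25 gap: {gap:.4f}")  # under 0.01 mAP
```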
My experiment environment is: