Hi, thank you for your interest in our work!
Yes, you are correct. The mean mAP@0.25 and mAP@0.50 are 0.329222 and 0.150687, respectively.
However, the numbers reported in Table 1 are averaged over 3 data splits, and this particular split gives an above-average result.
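For concreteness, a minimal sketch of that averaging (the per-split values below are placeholders, not the paper's actual per-split results):

# Average placeholder mAP@0.25 values from the 3 data splits; a single split
# landing above this mean is expected.
printf '33.0\n29.0\n28.0\n' | awk '{ sum += $1; n++ } END { printf "mean mAP@0.25 over %d splits = %.1f\n", n, sum / n }'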
Got it, thanks.
Actually, I trained twice: the first trial gives (34.52, 16.80) and the second gives (32.92, 15.07) for (mAP@0.25, mAP@0.5). It is really strange to see such a large variance in plain fully-supervised training performance.
There is another question about pre-training. When the pre-training ends, should I use best_checkpoint_sum.tar in the next command?

sh run_train.sh 0 train_sunrgbd sunrgbd sunrgbd_v1_train_0.05.txt best_checkpoint_sum.tar
Hi, sorry for the late reply. I missed your comment.
The large variance is due to the small dataset size (only 5%). And yes, use the best checkpoint, though it may not make a big difference.
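In other words, the full sequence looks roughly like this (a sketch reusing the script arguments from your commands above; adjust the GPU index to your setup, and note that prefixing the checkpoint with the pre-training log directory is my assumption):

# 1. Pre-train on the 5% split.
sh run_pretrain.sh 0 pretrain_sunrgbd sunrgbd sunrgbd_v1_train_0.05.txt
# 2. Train, initializing from the best pre-training checkpoint.
#    (The checkpoint may need the pre-training log-directory prefix,
#    e.g. pretrain_sunrgbd/best_checkpoint_sum.tar; unverified.)
sh run_train.sh 0 train_sunrgbd sunrgbd sunrgbd_v1_train_0.05.txt best_checkpoint_sum.tar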
Thanks for your reply!
Greetings! I run

sh run_pretrain.sh 1 pretrain_sunrgbd sunrgbd sunrgbd_v1_train_0.05.txt

After the training is over, I run

sh run_eval.sh 0 pretrain_sunrgbd sunrgbd sunrgbd_v1_train_0.05.txt pretrain_sunrgbd/checkpoint.tar

and get the following output: [evaluation output not reproduced here]

Does that mean mAP@0.25 and mAP@0.50 are 0.329222 and 0.150687, respectively? If so, why does Table 1 report mAP@0.25 and mAP@0.50 of 29.9 and 10.5?