csuhan / opendet2

Official code of the paper "Expanding Low-Density Latent Regions for Open-Set Object Detection" (CVPR 2022)
https://arxiv.org/abs/2203.14911

Reproducibility issue #13

Open misraya opened 2 years ago

misraya commented 2 years ago

Hi,

Amazing work on open-set detection! I trained the model after doing the dataset separation steps you suggest, with the exact same configs. The only difference is that I used 1 GPU instead of 8 GPUs, and these are the results I obtained. Interestingly, the WI and AOSE metrics are worse, but AP is better. Do you think this much difference is expected just from using fewer GPUs, or is there some other issue I should look for? Thanks in advance.

| VOC-COCO-20 | WI ↓ | AOSE ↓ | AP_U ↑ |
| --- | --- | --- | --- |
| Paper | 14.95 | 11286 | 14.93 |
| Reproduced | 20.68 | 13370 | 21.36 |

| VOC-COCO-0.5n | WI ↓ | AOSE ↓ | AP_U ↑ |
| --- | --- | --- | --- |
| Paper | 6.44 | 3944 | 9.05 |
| Reproduced | 55 | 5369 | 18.09 |
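For context on the metrics being compared: WI (Wilderness Impact) measures how much mixing unknown-class images into the test set degrades the precision of the known-class detector. A minimal sketch of the usual definition from the open-set detection literature (this is illustrative, not code from this repo) is:

```python
def wilderness_impact(precision_closed: float, precision_open: float) -> float:
    """WI = P_closed / P_open - 1.

    precision_closed: precision of known-class detections evaluated only on
        closed-set (known-class) images.
    precision_open: precision of the same detector when unknown-class images
        are mixed in (the open-set setting).
    Lower is better: WI = 0 means unknown objects cause no extra
    false positives on the known classes.
    """
    return precision_closed / precision_open - 1.0


# If open-set precision drops from 0.9 to 0.75, unknowns inflated
# the false-positive count noticeably:
wi = wilderness_impact(0.9, 0.75)
```

AOSE, by contrast, is a raw count of unknown-class objects misclassified as some known class, which is why it grows with the number of unknown instances in the split.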
csuhan commented 2 years ago

Hi~ Did you adjust the learning rate and total training iterations, or did you just train the model with batch size 16 on 1 GPU?

misraya commented 2 years ago

Hi, sorry for my late response. Since the 8 GPUs split the batch, and batch_size=16 fits on the 1 GPU I'm using, I kept the batch size fixed and decided not to adjust the number of iterations or the learning rate. Would you suggest any particular adjustment?
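For anyone landing here with a batch size that does *not* fit on one GPU: detectron2-style configs conventionally follow the linear scaling rule, where the learning rate scales proportionally with the total batch size and the iteration count scales inversely. A hedged sketch (the reference values below are illustrative, not taken from the opendet2 configs):

```python
def scale_schedule(base_lr: float, max_iter: int,
                   ref_batch: int, new_batch: int) -> tuple[float, int]:
    """Linear scaling rule: LR scales with batch size, iterations
    scale inversely, so the total number of images seen is preserved."""
    factor = new_batch / ref_batch
    return base_lr * factor, int(max_iter / factor)


# Illustrative example: a schedule tuned for 8 GPUs x 2 imgs/GPU (batch 16),
# retargeted to 1 GPU x 2 imgs/GPU (batch 2):
lr, iters = scale_schedule(base_lr=0.02, max_iter=32000,
                           ref_batch=16, new_batch=2)
```

Since you kept the total batch size at 16, no rescaling should be needed in theory; remaining gaps would have to come from elsewhere (e.g. BatchNorm statistics being computed per-GPU, or seed/data-order differences).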

csuhan commented 2 years ago

Can you reproduce the results of the baseline method, Faster R-CNN?

proxymallick commented 1 year ago

@csuhan Great paper and results! Hi @misraya, thanks for raising the issue.

This is what I get when I run the command below, with everything else kept the same as downloaded from the repo:

```shell
python tools/train_net.py --num-gpus 1 --config-file configs/faster_rcnn_R_50_FPN_3x_opendet.yaml
```

```
[08/13 16:06:05] opendet2.evaluation.pascal_voc_evaluation INFO: Evaluating voc_2007_test using 2007 metric.
 mAP    WI     AOSE      AP@K   P@K    R@K    AP@U   P@U    R@U
 77.90  0.00   0.00      77.90  28.61  91.20  78.83  26.98  90.58
[08/13 16:14:39] opendet2.evaluation.pascal_voc_evaluation INFO: Evaluating voc_coco_20_40_test using 2012 metric.
 mAP    WI     AOSE      AP@K   P@K    R@K    AP@U   P@U    R@U
 13.89  13.06  11584.00  55.64  19.85  73.26  12.39  23.10  29.52
[08/13 16:24:12] opendet2.evaluation.pascal_voc_evaluation INFO: Evaluating voc_coco_20_60_test using 2012 metric.
 mAP    WI     AOSE      AP@K   P@K    R@K    AP@U   P@U    R@U
 13.25  15.80  17597.00  53.23  17.95  72.45  9.12   26.78  20.33
[08/13 16:51:08] opendet2.evaluation.pascal_voc_evaluation INFO: Evaluating voc_coco_5000_test using 2012 metric.
 mAP    WI     AOSE      AP@K   P@K    R@K    AP@U   P@U    R@U
 17.95  11.58  8479.00   72.15  17.67  91.20  10.77  21.85  26.35
[08/13 17:01:26] opendet2.evaluation.pascal_voc_evaluation INFO: Evaluating voc_coco_10000_test using 2012 metric.
 mAP    WI     AOSE      AP@K   P@K    R@K    AP@U   P@U    R@U
 16.75  18.33  17113.00  67.21  13.33  91.20  12.98  30.13  26.07
[08/13 17:17:29] opendet2.evaluation.pascal_voc_evaluation INFO: Evaluating voc_coco_20000_test using 2012 metric.
 mAP    WI     AOSE      AP@K   P@K    R@K    AP@U   P@U    R@U
 15.23  25.07  34141.00  60.93  9.26   91.20  14.85  36.77  26.14
```