er-muyue / DeFRCN


How to reproduce the results in Figure 4(b)? #60

Closed · ry-jojo closed this issue 1 year ago

ry-jojo commented 1 year ago

Hi there, thanks for sharing the codebase for such a nice piece of work. I was trying to reproduce the results of Figure 4(b) with the following config:

```yaml
_BASE_: "../Base-RCNN.yaml"
MODEL:
  WEIGHTS: "/Path/to/Base/Pretrain/Weight"
  MASK_ON: False
  BACKBONE:
    FREEZE: True
  RESNETS:
    DEPTH: 101
  RPN:
    ENABLE_DECOUPLE: True
    BACKWARD_SCALE: to be tuned
    FREEZE: False
  ROI_HEADS:
    ENABLE_DECOUPLE: True
    BACKWARD_SCALE: to be tuned
    NUM_CLASSES: 20
    FREEZE_FEAT: True
    CLS_DROPOUT: True
DATASETS:
  TRAIN: ("coco14_trainval_novel_10shot_seed0",)
  TEST: ('coco14_test_novel',)
SOLVER:
  IMS_PER_BATCH: 16
  BASE_LR: 0.01
  STEPS: (2000,)
  MAX_ITER: 2500
  CHECKPOINT_PERIOD: 100000
  WARMUP_ITERS: 0
TEST:
  PCB_ENABLE: False
  PCB_MODELPATH: "/Path/to/ImageNet/Pre-Train/Weight"
OUTPUT_DIR: "/Path/to/Output/Dir"
```
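For context, I sweep the two backward scales from the command line roughly as sketched below. The `main.py` entry point, the `--num-gpus`/`--opts` override style, and the config path are my assumptions about the repo's detectron2-style launcher, so please adjust to the actual setup:

```python
# Rough sketch of my backward-scale sweep for Figure 4(b).
# Assumptions: the repo exposes a detectron2-style launcher (here called main.py)
# that accepts --config-file and --opts key/value overrides; the config path below
# is a placeholder for the fine-tune config shown above.
import subprocess

SCALES = [0.0, 0.001, 0.01, 0.1, 1.0]  # example values, not the paper's exact grid

for component in ("RPN", "ROI_HEADS"):
    for scale in SCALES:
        subprocess.run(
            [
                "python3", "main.py",
                "--num-gpus", "8",
                "--config-file", "configs/figure4b_finetune.yaml",  # placeholder path
                "--opts",
                f"MODEL.{component}.BACKWARD_SCALE", str(scale),
                "OUTPUT_DIR", f"output/fig4b_{component.lower()}_scale{scale}",
            ],
            check=True,
        )
```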

However, when I tune the RPN/ROI_HEADS BACKWARD_SCALE following Figure 4(b), the AP appears insensitive to the change of scale:

[Screenshot: AP results under the different BACKWARD_SCALE settings]

I think the GDL block is the core idea of this work. I implemented these experiments based on my own understanding of the paper, so I am not sure whether my configuration is correct. Could you please share more config details on how to run the experiments for Figure 4(b)? Thanks in advance!
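For reference, my reading of the GDL is that it acts as an identity in the forward pass and multiplies the gradient by the backward scale in the backward pass (a scale of 0 amounting to a stop-gradient). Below is a minimal PyTorch sketch of that idea as I understand it; it is not the repo's actual implementation (which, per the paper, also involves an affine transform):

```python
# Minimal sketch of a gradient-decoupled layer as I understand it from the paper:
# identity in the forward pass, gradient scaled by a constant in the backward pass.
# (The real GDL reportedly also contains an affine transform; omitted here.)
import torch


class _ScaleGradient(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, scale):
        ctx.scale = scale
        return x  # identity in forward

    @staticmethod
    def backward(ctx, grad_output):
        # Scale the gradient flowing back towards the backbone;
        # the scale itself receives no gradient.
        return grad_output * ctx.scale, None


class GradientDecoupleLayer(torch.nn.Module):
    def __init__(self, backward_scale: float):
        super().__init__()
        self.backward_scale = backward_scale

    def forward(self, x):
        return _ScaleGradient.apply(x, self.backward_scale)


# Usage sketch: insert between the backbone features and a head, e.g.
# features = GradientDecoupleLayer(backward_scale=0.01)(backbone(images))
```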

ry-jojo commented 1 year ago

I think I have found the answer: I should unfreeze the backbone. I will close this issue now. Thanks!
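For anyone reproducing Figure 4(b) later: the fix amounts to flipping MODEL.BACKBONE.FREEZE in the config above from True to False, e.g. as a command-line override in the same hypothetical launch sketch as before:

```python
# Same launch as in my sweep sketch above, but with the backbone unfrozen so the
# backward scales actually influence the backbone gradients.
# (main.py and --opts remain my assumptions about the repo's launcher;
#  the scale values below are just examples.)
import subprocess

subprocess.run(
    [
        "python3", "main.py",
        "--num-gpus", "8",
        "--config-file", "configs/figure4b_finetune.yaml",  # placeholder path
        "--opts",
        "MODEL.BACKBONE.FREEZE", "False",          # the fix: keep the backbone trainable
        "MODEL.RPN.BACKWARD_SCALE", "0.01",        # example value
        "MODEL.ROI_HEADS.BACKWARD_SCALE", "0.01",  # example value
    ],
    check=True,
)
```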