tianzhi0549 / FCOS

FCOS: Fully Convolutional One-Stage Object Detection (ICCV'19)
https://arxiv.org/abs/1904.01355

FCOS_R_50_FPN_1x mAP 36.59 #102

Closed WitLes closed 5 years ago

WitLes commented 5 years ago

Hey, I have run your baseline config "fcos_R_50_FPN_1x.yaml" without any change, but only got 36.59 mAP, matching the result reported in Section 4.1.2 of the paper. However, I noticed that README.md reports 37.1 mAP for the FCOS_R_50_FPN_1x baseline. Are there any small details I have overlooked? Thanks!

tianzhi0549 commented 5 years ago

@WitLes Are you using the latest code and models?

WitLes commented 5 years ago

Yes. I downloaded your code several days ago and ran the fcos_R_50_FPN_1x.yaml setting with ImageNet-pretrained models. I trained the model on coco_2017_train (115k images) twice and got the same 36.59 mAP on coco_2017_val (5k images) both times. I also downloaded the model linked in README.md and tested it, getting 37.06 mAP on coco_2017_val, so the gap is not caused by the evaluation scripts.

tianzhi0549 commented 5 years ago

@WitLes It might be because of the multi-GPU training. Please try to train the model with 4 GPUs instead of 8 GPUs.

WitLes commented 5 years ago

4 GPUs, 4 images per GPU? I will try this setting and report back here later.

tianzhi0549 commented 5 years ago

@WitLes Yes.

WitLes commented 5 years ago

I have run the 4-GPU (GeForce RTX 2080 Ti) config and finally got 36.809 mAP. I wonder if the drop in mAP is caused by the hardware or by package versions. My config is:

```yaml
MODEL:
  META_ARCHITECTURE: "GeneralizedRCNN"
  WEIGHT: "catalog://ImageNetPretrained/MSRA/R-50"
  RPN_ONLY: True
  FCOS_ON: True
  BACKBONE:
    CONV_BODY: "R-50-FPN-RETINANET"
  RESNETS:
    BACKBONE_OUT_CHANNELS: 256
  RETINANET:
    USE_C5: False  # FCOS uses P5 instead of C5
DATASETS:
  TRAIN: ("coco_2017_train",)
  TEST: ("coco_2017_val",)
INPUT:
  MIN_SIZE_TRAIN: (800,)
  MAX_SIZE_TRAIN: 1333
  MIN_SIZE_TEST: 800
  MAX_SIZE_TEST: 1333
DATALOADER:
  SIZE_DIVISIBILITY: 32
SOLVER:
  BASE_LR: 0.01
  WEIGHT_DECAY: 0.0001
  STEPS: (60000, 80000)
  MAX_ITER: 90000
  IMS_PER_BATCH: 16
  WARMUP_METHOD: "constant"
OUTPUT_DIR: '/data/fcos_outputs/baseline_4gpu'
```
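Note that with `IMS_PER_BATCH: 16` the effective batch size is the same whether training runs on 8 GPUs × 2 images or 4 GPUs × 4 images, so by the linear scaling rule the `BASE_LR: 0.01` should not need adjusting. A minimal sketch to check this (the helper names are hypothetical, not part of the FCOS code):

```python
def effective_batch_size(num_gpus, ims_per_gpu):
    """Total images per iteration across all GPUs."""
    return num_gpus * ims_per_gpu

def scaled_lr(base_lr, base_batch, new_batch):
    """Linear scaling rule: learning rate scales with batch size."""
    return base_lr * new_batch / base_batch

# Both launch configurations discussed in this thread give the same
# effective batch size, so BASE_LR: 0.01 applies unchanged.
print(effective_batch_size(8, 2))  # 16
print(effective_batch_size(4, 4))  # 16
print(scaled_lr(0.01, 16, effective_batch_size(4, 4)))  # 0.01
```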

final mAP:

```
Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.368
Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.554
Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.396
Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.208
Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.407
Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.486
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.313
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.513
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.549
Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.334
Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.597
Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.719
```
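As an aside, these lines follow the standard pycocotools summary format, so comparing them across runs can be automated. A small pure-Python helper (hypothetical, not part of this repo) to pull the numbers out of such logs:

```python
import re

# Matches pycocotools summary lines such as:
#  Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.368
SUMMARY_RE = re.compile(
    r"Average (?:Precision|Recall)\s+\((AP|AR)\) @\[ IoU=([\d.:]+)\s*\|"
    r" area=\s*(\w+)\s*\| maxDets=\s*(\d+)\s*\] = ([\d.]+)"
)

def parse_coco_summary(text):
    """Return {(metric, iou, area, max_dets): value} from a summary log."""
    results = {}
    for m in SUMMARY_RE.finditer(text):
        metric, iou, area, max_dets, value = m.groups()
        results[(metric, iou, area, int(max_dets))] = float(value)
    return results

log = "Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.368"
print(parse_coco_summary(log))  # {('AP', '0.50:0.95', 'all', 100): 0.368}
```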

tianzhi0549 commented 5 years ago

@WitLes I am not sure. But are you using Python 3? We used Python 2.

WitLes commented 5 years ago

Yes... I use Python 3.7.3 with PyTorch 1.1.0. Maybe that's the problem. Thanks anyway. I will try other configs to see whether different models have the same issue, and also try Python 2.7.

WitLes commented 5 years ago

I have run "fcos_R_50_FPN_1x.yaml" with [Anaconda2, Python 2.7, 4 × 2080 Ti GPUs (4 imgs/GPU), CUDA 9.0] and finally got 36.9 mAP. Is this deviation acceptable in your view?

```
Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.369
Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.555
Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.397
Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.211
Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.410
Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.484
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.312
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.515
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.549
Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.340
Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.602
Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.706
```

tianzhi0549 commented 5 years ago

@WitLes I think it is acceptable. But multiple runs yield the same mAP on my side. The difference might be due to the different versions of packages.

WitLes commented 5 years ago

OK. Thank you so much for your reply and advice.

haibochina commented 5 years ago

OK. Thank you very much for your reply and advice.

Hi, have you now achieved the accuracy reported in the FCOS paper?