tianzhi0549 / FCOS

FCOS: Fully Convolutional One-Stage Object Detection (ICCV'19)
https://arxiv.org/abs/1904.01355

Inference speed on MobileNetV2 #141

Closed stigma0617 closed 5 years ago

stigma0617 commented 5 years ago

Hi,

I tried to check inference time of 'FCOS_syncbn_bs32_c128_MNV2_FPN_1x' model on V100 GPU.

Environment

V100 GPU CUDA 10.0 cuDNN 7.3 nvidia driver 418.67

with this command:

CUDA_VISIBLE_DEVICES=0 python tools/test_net.py \
    --config-file "configs/fcos/fcos_syncbn_bs32_c128_ms_MNV2_FPN_1x.yaml" \
    TEST.IMS_PER_BATCH 1 \
    MODEL.WEIGHT ./FCOS_syncbn_bs32_c128_MNV2_FPN_1x.pth

The result is below:

2019-09-16 15:45:14,878 fcos_core.inference INFO: Total run time: 0:05:39.123327 (0.06782466535568238 s / img per device, on 1 devices)
2019-09-16 15:45:14,878 fcos_core.inference INFO: Model inference time: 0:05:08.588011 (0.061717602157592776 s / img per device, on 1 devices)

This is much slower than the 19 ms reported in the README.

What do you think of this result?
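For reference, converting the per-image figure from the log above into milliseconds makes the gap explicit (numbers copied from the log; the 19 ms is the README figure at the time):

```python
# Per-image model inference time taken from the fcos_core.inference log line above.
model_inference_s_per_img = 0.061717602157592776
readme_ms = 19  # MNV2 inference time reported in the README at the time

measured_ms = model_inference_s_per_img * 1000
print(f"measured: {measured_ms:.1f} ms/img vs reported: {readme_ms} ms/img")
# measured comes out to roughly 61.7 ms/img, about 3x the reported figure
```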

tianzhi0549 commented 5 years ago

@stigma0617 The inference times of the MobileNet-based models in the README might not be correct. I will re-test them soon.

tianzhi0549 commented 5 years ago

@stigma0617 The inference times have been re-tested and updated in the README.

stigma0617 commented 5 years ago

@tianzhi0549

The updated result (45 ms) is still faster than my measurement (61 ms).

Could you share your command line and hardware specs?

Did you run the model with TEST.IMS_PER_BATCH 1?

I would really like to reproduce your faster result.
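In case part of the gap comes from how the timing is done rather than from hardware, here is a minimal benchmarking sketch. It is pure Python; `model` and `image` in the usage note are hypothetical placeholders, and the `torch.cuda.synchronize()` point in the docstring applies only to GPU runs:

```python
import time

def benchmark(fn, warmup=10, iters=100):
    """Return the average wall-clock time per call of fn(), in seconds.

    Warm-up iterations are excluded so one-time costs (cuDNN autotuning,
    memory allocator growth, JIT caches) do not inflate the average.
    For GPU models, call torch.cuda.synchronize() before reading each
    timestamp: CUDA kernels launch asynchronously, so without a sync the
    measurement only captures kernel-launch overhead, not execution time.
    """
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters
```

Usage would look something like (placeholders, not the repo's API):

```python
# with torch.no_grad():
#     per_img = benchmark(lambda: model(image))
# print(f"{per_img * 1000:.1f} ms / img")
```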

tianzhi0549 commented 5 years ago

I used the following script.

MODEL=fcos_syncbn_bs32_c128_MNV2_FPN_1x
wget https://cloudstor.aarnet.edu.au/plus/s/3GKwaxZhDSOlCZ0/download -O ${MODEL}.pth

python tools/test_net.py \
    --config-file configs/fcos/${MODEL}.yaml \
    MODEL.WEIGHT ${MODEL}.pth \
    TEST.IMS_PER_BATCH 1

My CPU is Intel(R) Xeon(R) Gold 6151 CPU @ 3.00GHz and GPU is V100 (16GB).

stigma0617 commented 5 years ago

@tianzhi0549

My CPU is an Intel(R) Xeon(R) CPU E5-2640 v4 @ 2.40GHz, which is not as fast as yours.

Thanks for your reply.