@stigma0617 The inference times of the MobileNet-based models in the README might not be correct. I will re-test them soon.
@stigma0617 The inference times have been re-tested and updated in the README.
@tianzhi0549
The updated result (45ms) is still faster than my result (61ms).
Could you share your command line and your hardware spec?
Did you run the model with TEST.IMS_PER_BATCH 1?
I really want to reproduce your fast result.
I used the following script:
```bash
MODEL=fcos_syncbn_bs32_c128_MNV2_FPN_1x
wget https://cloudstor.aarnet.edu.au/plus/s/3GKwaxZhDSOlCZ0/download -O ${MODEL}.pth
python tools/test_net.py \
    --config-file configs/fcos/${MODEL}.yaml \
    MODEL.WEIGHT ${MODEL}.pth \
    TEST.IMS_PER_BATCH 1
```
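Not from the FCOS repo, but in case it helps: to compare machines without the dataloader and post-processing overhead of tools/test_net.py, the bare forward pass can be timed with a rough sketch like the one below, where `model` and `sample` are placeholders for any eval-mode module and a correctly preprocessed input:
```python
# Rough timing sketch (placeholder model/sample, not FCOS-specific):
# average the forward pass over many iterations after a warm-up.
import time
import torch

@torch.no_grad()
def avg_forward_ms(model, sample, warmup=20, iters=200):
    model.eval()
    for _ in range(warmup):       # warm-up: CUDA context creation, cudnn autotuning
        model(sample)
    torch.cuda.synchronize()      # flush queued kernels before starting the clock
    start = time.time()
    for _ in range(iters):
        model(sample)
    torch.cuda.synchronize()      # wait for the last forward pass to finish
    return (time.time() - start) / iters * 1000.0
```
Numbers from a loop like this only cover the network itself, so they should come out lower than the end-to-end per-image time that tools/test_net.py reports.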
My CPU is an Intel(R) Xeon(R) Gold 6151 @ 3.00GHz and my GPU is a V100 (16GB).
@tianzhi0549
My CPU is an Intel(R) Xeon(R) E5-2640 v4 @ 2.40GHz, which is not as good as yours.
Thanks for your reply.
Hi,
I tried to check the inference time of the 'FCOS_syncbn_bs32_c128_MNV2_FPN_1x' model on a V100 GPU.
Environment: V100 GPU, CUDA 10.0, cuDNN 7.3, NVIDIA driver 418.67
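For reference when comparing environments, the versions PyTorch is actually running with can be printed directly (a quick check snippet, not part of the repo):
```python
# Print the CUDA/cuDNN/GPU combination PyTorch sees, since these
# can easily account for large differences in measured inference time.
import torch

print("torch:", torch.__version__)
print("cuda :", torch.version.cuda)
print("cudnn:", torch.backends.cudnn.version())
print("gpu  :", torch.cuda.get_device_name(0))
```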
I ran the test with the following command:
```bash
CUDA_VISIBLE_DEVICES=0 python tools/test_net.py --config-file "configs/fcos/fcos_syncbn_bs32_c128_ms_MNV2_FPN_1x.yaml" TEST.IMS_PER_BATCH 1 MODEL.WEIGHT ./FCOS_syncbn_bs32_c128_MNV2_FPN_1x.pth
```
The result is below.
This result is slower than your reported time (19ms).
What do you think about this result?