biubug6 / Pytorch_Retinaface

RetinaFace gets 80.99% on the WiderFace hard validation set using mobilenet0.25.
MIT License
2.63k stars 774 forks

Inference time is longer than the 5-6 ms reported by the author, and it stays the same across different input sizes #79

Open Govan111 opened 4 years ago

Govan111 commented 4 years ago

When I run detect.py with the default image size of 624x1024, the inference time is 14 ms. When I resize the image to 256x256, the inference time is also 14 ms. So I have two questions: first, why is the inference time longer than the 5-6 ms reported by the author? Second, why is the inference time the same for different input sizes? I tested on a Tesla P100 with pytorch=1.2.0 and torchvision=0.4.0. I hope to get a reply, thanks!
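One common explanation for size-independent timings on a GPU: CUDA kernels launch asynchronously, so reading the wall clock right after the forward call measures mostly launch overhead (roughly constant) rather than actual compute. A minimal benchmarking sketch is below; the `benchmark` helper and the toy `work` function are illustrative, not from this repo, and the CUDA synchronize call is shown only as a comment so the example runs on any machine:

```python
import time

def benchmark(fn, warmup=5, iters=50):
    """Return the average wall-clock latency of fn() after warm-up runs.

    Note: when fn runs on a GPU, CUDA kernels are launched
    asynchronously, so you must synchronize (e.g. call
    torch.cuda.synchronize()) before reading the clock; otherwise
    you only measure launch overhead, which barely changes with
    input size.
    """
    for _ in range(warmup):           # warm-up: caches, cudnn autotune, etc.
        fn()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
        # torch.cuda.synchronize()   # required when fn runs on CUDA
    return (time.perf_counter() - start) / iters

# Toy CPU workload standing in for a forward pass; with correct
# timing, latency should grow with the input size n.
def work(n):
    return sum(i * i for i in range(n))

small = benchmark(lambda: work(10_000))
large = benchmark(lambda: work(100_000))
```

With synchronization in place, `large` comes out clearly bigger than `small`; a harness that reports the same number for both sizes is usually timing launches, not compute.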

DingtianX commented 4 years ago

Same problem here. Have you worked it out?

GengCauWong commented 4 years ago

Same problem here. Have you worked it out?

(Translated from Chinese:) The mobilenet-0.25 forward pass takes about 60 ms on an i7 CPU, far from the 17.2 ms CPU-1 time the author reports for VGA images. Have you run into this as well?