When I run detect.py with the default image size of 624x1024, the inference time is 14 ms. When I resize the input to 256x256, the inference time is still 14 ms. So I have two questions: first, why is the inference time longer than the 5-6 ms reported by the author? Second, why is the inference time the same for different input sizes? I tested on a Tesla P100 with pytorch=1.2.0 and torchvision=0.4.0. I hope to get a reply, thanks!
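In case the timing methodology matters, below is a minimal sketch of how a single forward pass could be timed on the GPU. This is not the timing code from detect.py; `model` and `img` are placeholder names, and the point is only that without `torch.cuda.synchronize()` the measured time may not reflect the actual GPU work, which could make different input sizes look equally fast.

```python
import time
import torch

def time_inference(model, img, device="cuda"):
    # Hypothetical helper: `model` is the detector, `img` a preprocessed
    # input tensor; neither is taken from detect.py.
    model = model.to(device).eval()
    img = img.to(device)

    with torch.no_grad():
        # Warm-up pass so CUDA context setup is excluded from the timing.
        model(img)
        torch.cuda.synchronize()

        start = time.time()
        model(img)
        # Without this synchronize, the timer can stop before the GPU
        # has actually finished the asynchronous kernels.
        torch.cuda.synchronize()
        return (time.time() - start) * 1000  # milliseconds
```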