PKUZHOU / MTCNN_FaceDetection_TensorRT

MTCNN C++ implementation with NVIDIA TensorRT Inference accelerator SDK

inference speed too slow #4

Closed · zacheH closed this issue 5 years ago

zacheH commented 5 years ago

I ran your demo in a TensorRT 5.0 Docker image and found that inference on your 4.jpg was too slow. My environment is Ubuntu 16.04 + CUDA 9.0 + cuDNN 7.3.1 + TensorRT 5.0. Here is the log:

Start generating TenosrRT runtime models
End generating TensorRT runtime models
first model inference time is 0.842
first model inference time is 0.511
first model inference time is 0.396
first model inference time is 0.313
first model inference time is 0.296
first model inference time is 0.266
first model inference time is 0.254
first time is 3.134
second time is 13.168
third time is 7.437
first model inference time is 0.612
first model inference time is 0.431
first model inference time is 0.344
first model inference time is 0.282
first model inference time is 0.266
first model inference time is 0.251
first model inference time is 0.269
first time is 2.672
second time is 15.089
third time is 7.409
time is 25.31

Do you have any idea about this?
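For reference, the falling "first model inference time" values in the log (0.842 down to about 0.25) are consistent with one-time GPU warm-up on the first few runs, so timings taken before warm-up overstate steady-state latency. Below is a minimal timing sketch, not the repo's code: the `timedExecuteMs` helper and the warm-up/run counts are illustrative assumptions, while `IExecutionContext::execute` and `cudaDeviceSynchronize` are the actual TensorRT 5 and CUDA runtime calls.

```cpp
// Sketch of a warm-up-aware timing loop (illustrative helper; assumes a
// built engine, an execution context, and device buffers already bound in
// the usual TensorRT binding order).
#include <chrono>
#include <NvInfer.h>
#include <cuda_runtime.h>

double timedExecuteMs(nvinfer1::IExecutionContext& context,
                      void** buffers, int batchSize,
                      int warmupRuns, int timedRuns)
{
    // Warm-up: the first few launches pay one-time CUDA/cuDNN/TensorRT
    // initialization costs, which matches the decreasing times in the log.
    for (int i = 0; i < warmupRuns; ++i)
        context.execute(batchSize, buffers);   // synchronous execution
    cudaDeviceSynchronize();                   // drain any pending GPU work

    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < timedRuns; ++i)
        context.execute(batchSize, buffers);
    cudaDeviceSynchronize();                   // make sure the GPU is done
    auto stop = std::chrono::steady_clock::now();

    return std::chrono::duration<double, std::milli>(stop - start).count()
           / timedRuns;
}
```

Note that `execute()` blocks until inference finishes; the asynchronous variant `enqueue()` would need a `cudaStreamSynchronize` on its stream before reading the clock instead.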

zacheH commented 5 years ago

My bad, closing it now.