NVIDIA-AI-IOT / tf_trt_models

TensorFlow models accelerated with NVIDIA TensorRT
BSD 3-Clause "New" or "Revised" License

very slow inference result on Jetson TX2 #51

Open PythonImageDeveloper opened 5 years ago

PythonImageDeveloper commented 5 years ago

Hi everyone, I converted ssdlite_mobilenetv2, ssd_mobilenetv2, and ssd_resnet50 to TensorRT with the TensorFlow TF-TRT API, which generated the optimized .pb files. I'm using TensorFlow 1.13 and JetPack 4.2, but the final inference speed is poor: I only get about 2.5 FPS, which isn't real-time, and loading the model takes about 10 minutes. Why is that?
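
For context, my conversion roughly follows the repo's README. A minimal sketch of that path, assuming the TF 1.13 contrib TF-TRT API and the repo's `build_detection_graph` helper (model name and parameter values here are only illustrative):

```python
import tensorflow.contrib.tensorrt as trt
from tf_trt_models.detection import download_detection_model, build_detection_graph

# Illustrative model name; ssdlite_mobilenetv2 / ssd_resnet50 follow the same path.
config_path, checkpoint_path = download_detection_model('ssd_mobilenet_v2_coco')

# Build a frozen TensorFlow graph from the object detection config + checkpoint.
frozen_graph, input_names, output_names = build_detection_graph(
    config=config_path,
    checkpoint=checkpoint_path
)

# TF-TRT optimization of the frozen graph (parameter values are illustrative).
trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,
    outputs=output_names,
    max_batch_size=1,
    max_workspace_size_bytes=1 << 25,
    precision_mode='FP16',
    minimum_segment_size=50
)

# Serialize the optimized graph to the .pb file that I later load for inference.
with open('ssd_mobilenet_v2_trt.pb', 'wb') as f:
    f.write(trt_graph.SerializeToString())
```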

filipski commented 5 years ago

Hi, check my thread at https://devtalk.nvidia.com/default/topic/1046492/tensorrt/extremely-long-time-to-load-trt-optimized-frozen-tf-graphs/1

Upgrading the protobuf might help. Good luck.
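
For what it's worth, a quick way to check which protobuf backend Python is using (the pure-Python implementation is typically what makes loading large frozen graphs painfully slow); just a sketch:

```python
# Prints 'python' for the slow pure-Python protobuf implementation,
# 'cpp' for the much faster C++ implementation.
from google.protobuf.internal import api_implementation
print(api_implementation.Type())
```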

jaybdub commented 5 years ago

Hi PythonImageDeveloper,

Could you clarify which model you're seeing 2.5 FPS with? Are you running the pre-processing scripts contained in this repository, or using create_inference_engine directly?

Best, John

PythonImageDeveloper commented 5 years ago

Hi @jaybdub, I'm using create_inference_engine directly.
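
To show how I'm measuring, here is roughly how I time inference (a sketch; the graph path, tensor names, and input shape are placeholders). The first few runs are excluded from timing, since session start-up and TF-TRT engine initialization dominate them:

```python
import time
import numpy as np
import tensorflow as tf

# Placeholder path and tensor names for illustration only.
graph_path = 'ssd_mobilenet_v2_trt.pb'
input_name = 'image_tensor:0'
output_names = ['detection_boxes:0', 'detection_scores:0',
                'detection_classes:0', 'num_detections:0']

# Load the TF-TRT optimized frozen graph (this is the step that takes minutes).
graph_def = tf.GraphDef()
with tf.gfile.GFile(graph_path, 'rb') as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')

# Dummy input image; SSD models in this repo expect a uint8 batch.
image = np.random.randint(0, 255, (1, 300, 300, 3), dtype=np.uint8)

with tf.Session(graph=graph) as sess:
    # Warm-up runs: exclude start-up and engine initialization from timing.
    for _ in range(5):
        sess.run(output_names, feed_dict={input_name: image})

    n = 50
    t0 = time.time()
    for _ in range(n):
        sess.run(output_names, feed_dict={input_name: image})
    print('FPS: %.1f' % (n / (time.time() - t0)))
```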