PythonImageDeveloper opened 5 years ago
Hi, check my thread at https://devtalk.nvidia.com/default/topic/1046492/tensorrt/extremely-long-time-to-load-trt-optimized-frozen-tf-graphs/1
Upgrading protobuf might help. Good luck.
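For reference, a quick way to check whether the slow pure-Python protobuf backend is the culprit (a common cause of multi-minute .pb loading on Jetson; this is a sketch, not something from the linked thread):

```python
# Check which protobuf backend is active; the 'python' implementation
# parses large frozen graphs far more slowly than 'cpp'.
from google.protobuf.internal import api_implementation

print(api_implementation.Type())  # 'cpp' is what you want for fast .pb loading
```

If this prints 'python', installing or rebuilding protobuf with the C++ implementation typically cuts graph load time substantially.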
Hi PythonImageDeveloper,
Could you clarify which model you're seeing 2.5 FPS with? Are you running the pre-processing scripts contained in this repository, or using create_inference_engine directly?
Best, John
Hi @jaybdub
I'm using create_inference_engine.
Hi everyone, I converted ssdlite_mobilenetv2, ssd_mobilenetv2, and ssd_resnet50 to TensorRT with the TensorFlow (TF-TRT) API, which produced a .pb file. I'm on TensorFlow 1.13 and JetPack 4.2, but the final performance is poor: inference runs at about 2.5 FPS, which is not real time, and loading the model takes about 10 minutes. Why?
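For context, the TF-TRT conversion path on TF 1.13 looks roughly like the sketch below. The file paths, output node names (taken from the standard TF Object Detection API SSD exports), and workspace size are placeholders, not values from the original post:

```python
# TF-TRT conversion sketch for TF 1.13 / JetPack 4.2.
# 'frozen_graph.pb' and the output node names are assumptions.
import tensorflow as tf
import tensorflow.contrib.tensorrt as trt

with tf.gfile.GFile('frozen_graph.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

trt_graph = trt.create_inference_graph(
    input_graph_def=graph_def,
    outputs=['num_detections', 'detection_boxes',
             'detection_scores', 'detection_classes'],
    max_batch_size=1,
    max_workspace_size_bytes=1 << 25,
    precision_mode='FP16')  # FP16 is usually the fastest mode on Jetson

with tf.gfile.GFile('trt_graph.pb', 'wb') as f:
    f.write(trt_graph.SerializeToString())
```

Note that besides protobuf parsing, TF-TRT may also build TensorRT engines at first run for subgraphs that were not pre-built, which can add further to startup time.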