NVIDIA-AI-IOT / tf_trt_models

TensorFlow models accelerated with NVIDIA TensorRT
BSD 3-Clause "New" or "Revised" License

Very slow inference on video feed #10

Closed: hirovi closed this issue 6 years ago

hirovi commented 6 years ago

Hi,

I saved the session using: `tf.saved_model.simple_save(tf_sess, "./save_dir/", inputs={"tf_input": tf_input}, outputs={"tf_scores": tf_scores, "tf_boxes": tf_boxes, "tf_classes": tf_classes})`

And loaded it again in a different script using: `tf.saved_model.loader.load(tf_sess, [tag_constants.SERVING], "./save_dir/")`

When running `scores, boxes, classes = tf_sess.run([tf_scores, tf_boxes, tf_classes], feed_dict={tf_input: image_resized[None, ...]})`, I get around 4-5 fps on average.
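For context, a minimal sketch of that load-and-run loop, assuming the detection tensor names used in this repo's examples (`image_tensor:0`, `detection_scores:0`, `detection_boxes:0`, `detection_classes:0`) and OpenCV for the video feed:

```python
import cv2
import tensorflow as tf
from tensorflow.python.saved_model import tag_constants

# Load the SavedModel exported with simple_save above.
tf_sess = tf.Session()
tf.saved_model.loader.load(tf_sess, [tag_constants.SERVING], "./save_dir/")

# Resolve tensor handles once, outside the capture loop.
graph = tf_sess.graph
tf_input = graph.get_tensor_by_name("image_tensor:0")      # assumed names
tf_scores = graph.get_tensor_by_name("detection_scores:0")
tf_boxes = graph.get_tensor_by_name("detection_boxes:0")
tf_classes = graph.get_tensor_by_name("detection_classes:0")

cap = cv2.VideoCapture(0)  # assumed camera index
while True:
    ok, frame = cap.read()
    if not ok:
        break
    image_resized = cv2.resize(frame, (300, 300))  # assumed SSD input size
    scores, boxes, classes = tf_sess.run(
        [tf_scores, tf_boxes, tf_classes],
        feed_dict={tf_input: image_resized[None, ...]})
cap.release()
```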

Is there any advice for running inference on a video feed so it reaches the frame rate mentioned in the README?

Many thanks!

PS: Before running inference I did run `jetson_clocks` and `nvpmodel -m 0`.

hirovi commented 6 years ago

Never mind; I changed the way the session was being called, and it now runs at 15 fps.
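(The exact change isn't shown in the thread. A common fix for this symptom, sketched here as an assumption rather than hirovi's actual code: load the model and resolve tensor handles once at startup instead of per frame, and run one warm-up inference before measuring, since the first `sess.run` pays the TensorRT engine setup cost.)

```python
import numpy as np
import tensorflow as tf
from tensorflow.python.saved_model import tag_constants

# Load ONCE at startup, never inside the per-frame loop.
tf_sess = tf.Session()
tf.saved_model.loader.load(tf_sess, [tag_constants.SERVING], "./save_dir/")
graph = tf_sess.graph
tf_input = graph.get_tensor_by_name("image_tensor:0")      # assumed names
tf_scores = graph.get_tensor_by_name("detection_scores:0")
tf_boxes = graph.get_tensor_by_name("detection_boxes:0")
tf_classes = graph.get_tensor_by_name("detection_classes:0")

# Warm-up: the first run builds/initializes the TensorRT engines and can
# take seconds, so exclude it from any fps measurement.
dummy = np.zeros((1, 300, 300, 3), dtype=np.uint8)
tf_sess.run([tf_scores, tf_boxes, tf_classes], feed_dict={tf_input: dummy})
```

With the session and tensor handles reused across frames, each subsequent `tf_sess.run` should only pay the steady-state inference cost.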

atyshka commented 5 years ago

@hirovi How did you change it? I'm having the same issue.