I saved the session using:
tf.saved_model.simple_save(tf_sess, "./save_dir/", inputs={"tf_input": tf_input}, outputs={"tf_scores": tf_scores, "tf_boxes": tf_boxes, "tf_classes": tf_classes})
And loaded it again in a different script using:
tf.saved_model.loader.load(tf_sess, [tag_constants.SERVING], "./save_dir/")
When running,
scores, boxes, classes = tf_sess.run([tf_scores, tf_boxes, tf_classes], feed_dict={tf_input: image_resized[None, ...]})
I get around 4-5 FPS on average.
Is there any advice on running inference on a video feed to achieve the frame rate mentioned in the README?
Many thanks!
PS: Before running inference, I did run /jetson_clocks and nvpmodel -m 0.