VincentCheng24 opened this issue 4 years ago
@VincentCheng24 can you please share your graph freezing code? I am trying to convert the model to OpenVINO IR format.
Is it similar to the following, or are your output_node_names different?

```python
import tensorflow as tf
from tensorflow.python.framework import graph_io

frozen = tf.graph_util.convert_variables_to_constants(
    sess, sess.graph_def, ["superpoint/prob_nms", "superpoint/descriptors"])
graph_io.write_graph(frozen, './', 'inference_graph.pb', as_text=False)
```
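For reference, here is a minimal, self-contained sketch of that freezing step. A toy graph stands in for the SuperPoint checkpoint; for the real model you would first restore it (`tf.compat.v1.train.import_meta_graph` + `saver.restore`) and pass the actual output names, e.g. `["superpoint/prob_nms", "superpoint/descriptors"]`.

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

graph = tf.Graph()
with graph.as_default(), tf.Session(graph=graph) as sess:
    # Toy graph standing in for the restored SuperPoint model.
    x = tf.placeholder(tf.float32, [None, 2], name="input")
    w = tf.Variable(tf.ones([2, 1]), name="w")
    y = tf.matmul(x, w, name="output")
    sess.run(tf.global_variables_initializer())

    # Freezing replaces every Variable with a Const holding its current
    # value, yielding a single GraphDef that mo_tf.py can consume.
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ["output"])
    tf.train.write_graph(frozen, "./", "inference_graph.pb", as_text=False)
```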
@dinara92 did you get mo_tf.py working for the superpoint model sp_v6? I tried to convert it too (I trained it and got the model.meta files) and got some errors. Thanks!
@jennyzhang2018
> @dinara92 did you get mo_tf.py working for the superpoint model sp_v6? I tried to convert it too (I trained it and got the model.meta files) and got some errors. Thanks!
I could not convert it using the mo_tf.py Model Optimizer script. First, I used the saved_model_dir parameter with sp_v6 and input_shape=[1,480,640,1] (NHWC, OpenVINO's standard image input format). It failed with: The operation is not implemented for node "superpoint/pred_tower0/map/while/box_nms/Where". (This error comes up in both OpenVINO 2020.1 and 2020.4.)
Then I froze the saved model into a .pb (once keeping all output nodes, and once with only the "superpoint/prob_nms" output node) and upgraded OpenVINO to 2020.4 (the latest) --> the error moved to node "superpoint/pred_tower0/map/while/box_nms/GatherNd".
In general this means some operation is not implemented, so we need to cut the model and implement the remaining operations ourselves. If you manage to convert it, please share your solution. Thank you
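One way to apply that "cut the model" idea, sketched on a toy graph: keep only the subgraph feeding a chosen node (for SuperPoint a plausible cut point would be the detector heatmap just before box_nms — the exact node name is an assumption here) and reimplement the unsupported post-processing outside OpenVINO. The Model Optimizer can also cut at conversion time via its `--output` option.

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Toy stand-in for the frozen SuperPoint graph: a "supported" part
# ("heatmap") followed by an "unsupported" post-processing op ("postproc",
# playing the role of box_nms).
graph = tf.Graph()
with graph.as_default():
    x = tf.placeholder(tf.float32, [None, 4], name="input")
    h = tf.nn.relu(x, name="heatmap")
    _ = tf.nn.top_k(h, k=2, name="postproc")

# Keep only the ops needed to compute "heatmap"; everything downstream
# is dropped and would be redone in NumPy after inference.
cut = tf.graph_util.extract_sub_graph(graph.as_graph_def(), ["heatmap"])
print(sorted(n.name for n in cut.node))
```

The cut GraphDef can then be written out with `tf.train.write_graph` and fed to mo_tf.py.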
Has anyone figured out how to serve the SuperPoint model on OpenVINO?
After several attempts at converting our checkpoint to .pb files, we succeeded. However, when loading the .pb file in an inference script, we get the error below:

```
RuntimeError: MetaGraphDef associated with tags 'serve' could not be found in SavedModel. To inspect available tag-sets in the SavedModel, please use the SavedModel CLI: saved_model_cli
available_tags: [{'serve', 'train'}]
```

We do not get this issue when loading the publicly available saved model. Can you please tell us where we are going wrong?
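The `available_tags` line is the clue: your MetaGraph was exported with both tags in a single tag-set ({'serve', 'train'}), so loading with just `["serve"]` finds no exact match. You either request the full tag-set or re-export with a plain serve tag (`saved_model_cli show --dir <dir> --all` will confirm the tag-sets of your real export). A sketch with a toy model:

```python
import os
import tempfile
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()
export_dir = os.path.join(tempfile.mkdtemp(), "toy_saved_model")

# Export ONE MetaGraph carrying BOTH tags, reproducing the situation above.
graph = tf.Graph()
with graph.as_default(), tf.Session(graph=graph) as sess:
    x = tf.placeholder(tf.float32, [None, 1], name="input")
    w = tf.Variable([[2.0]], name="w")
    y = tf.matmul(x, w, name="output")
    sess.run(tf.global_variables_initializer())
    builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
    builder.add_meta_graph_and_variables(sess, ["serve", "train"])
    builder.save()

# Loading must name the full tag-set; ["serve"] alone raises the
# "MetaGraphDef associated with tags 'serve' could not be found" error.
load_graph = tf.Graph()
with load_graph.as_default(), tf.Session(graph=load_graph) as sess:
    tf.saved_model.loader.load(sess, ["serve", "train"], export_dir)
    out = load_graph.get_tensor_by_name("output:0")
    result = sess.run(out, {"input:0": [[3.0]]})
```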
TensorRT 7.1 on Jetson AGX Xavier produces wrong results for the node converted from an ONNX Resize op (opset 11), which was itself converted from a tf.image.resize_bilinear node in a TensorFlow frozen graph.
Does anyone have experience with this operation? Thanks a lot.
onnx model and TF frozen graph