onnx / tensorflow-onnx

Convert TensorFlow, Keras, Tensorflow.js and Tflite models to ONNX
Apache License 2.0

ValueError: get tensor value: 'loop_body_1_nonmaxsuppressionv5_pfor_while_loop_body_1_nonmaxsuppressionv5_soft_nms_sigma_0' must be Const #1607

Closed sniperHJJ closed 3 years ago

sniperHJJ commented 3 years ago

```
# python3 -m tf2onnx.convert --saved-model ./ --opset 12 --output model.onnx
/usr/lib/python3.6/runpy.py:125: RuntimeWarning: 'tf2onnx.convert' found in sys.modules after import of package 'tf2onnx', but prior to execution of 'tf2onnx.convert'; this may result in unpredictable behaviour
  warn(RuntimeWarning(msg))
2021-07-09 09:17:14,903 - WARNING - '--tag' not specified for saved_model. Using --tag serve
2021-07-09 09:17:26,915 - INFO - Signatures found in model: [serving_default].
2021-07-09 09:17:26,915 - WARNING - '--signature_def' not specified, using first signature: serving_default
2021-07-09 09:17:26,919 - INFO - Output names: ['output_0', 'output_1', 'output_2', 'output_3']
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tf2onnx/tf_loader.py:662: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.compat.v1.graph_util.extract_sub_graph
2021-07-09 09:17:30,645 - WARNING - From /usr/local/lib/python3.6/dist-packages/tf2onnx/tf_loader.py:662: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.compat.v1.graph_util.extract_sub_graph
2021-07-09 09:17:31,701 - INFO - Using tensorflow=2.2.0, onnx=1.7.0, tf2onnx=1.10.0/448d61
2021-07-09 09:17:31,701 - INFO - Using opset <onnx, 12>
2021-07-09 09:17:33,150 - INFO - Computed 0 values for constant folding
2021-07-09 09:17:33,164 - INFO - Computed 0 values for constant folding
2021-07-09 09:17:33,168 - INFO - Computed 0 values for constant folding
2021-07-09 09:17:33,174 - INFO - Computed 0 values for constant folding
2021-07-09 09:17:34,011 - INFO - Computed 0 values for constant folding
2021-07-09 09:17:35,488 - ERROR - Failed to convert node 'loop_body_1/NonMaxSuppressionV5/pfor/while/NonMaxSuppressionV5' (fct=<bound method NonMaxSuppression.version_11 of <class 'tf2onnx.onnx_opset.tensor.NonMaxSuppression'>>) 'OP=NonMaxSuppressionV5\nName=loop_body_1/NonMaxSuppressionV5/pfor/while/NonMaxSuppressionV5\nInputs:\n\tUnsqueeze103:0=Unsqueeze, [1, -1, 4], 1\n\tUnsqueeze105:0=Unsqueeze, [1, 1, -1], 1\n\tloop_body_1/NonMaxSuppressionV5/pfor/while/NonMaxSuppressionV5107:0=Cast, [], 7\n\tloop_body_1_nonmaxsuppressionv5_pfor_while_loop_body_1_nonmaxsuppressionv5_iou_threshold_0:0=Placeholder, [], 1\n\tloop_body_1_nonmaxsuppressionv5_pfor_while_loop_body_1_nonmaxsuppressionv5_score_threshold_0:0=Placeholder, [], 1\n\tloop_body_1_nonmaxsuppressionv5_pfor_while_loop_body_1_nonmaxsuppressionv5_soft_nms_sigma_0:0=Placeholder, [], 1\nOutpus:\n\tloop_body_1/NonMaxSuppressionV5/pfor/while/NonMaxSuppressionV5:0=[-1], 6\nOutpus:\n\tloop_body_1/NonMaxSuppressionV5/pfor/while/NonMaxSuppressionV5:1=[-1], 1\nOutpus:\n\tloop_body_1/NonMaxSuppressionV5/pfor/while/NonMaxSuppressionV5:2=[], 6'
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/tf2onnx/tfonnx.py", line 291, in tensorflow_onnx_mapping
    func(g, node, **kwargs, initialized_tables=initialized_tables, dequantize=dequantize)
  File "/usr/local/lib/python3.6/dist-packages/tf2onnx/onnx_opset/tensor.py", line 1841, in version_11
    cls.any_version(11, ctx, node, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tf2onnx/onnx_opset/tensor.py", line 1770, in any_version
    utils.make_sure(len(node.inputs) <= 5 or int(node.inputs[5].get_tensor_value(False)) == 0,
  File "/usr/local/lib/python3.6/dist-packages/tf2onnx/graph.py", line 314, in get_tensor_value
    raise ValueError("get tensor value: '{}' must be Const".format(self.name))
ValueError: get tensor value: 'loop_body_1_nonmaxsuppressionv5_pfor_while_loop_body_1_nonmaxsuppressionv5_soft_nms_sigma_0' must be Const
Traceback (most recent call last):
  File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.6/dist-packages/tf2onnx/convert.py", line 605, in <module>
    main()
  File "/usr/local/lib/python3.6/dist-packages/tf2onnx/convert.py", line 265, in main
    output_path=args.output)
  File "/usr/local/lib/python3.6/dist-packages/tf2onnx/convert.py", line 154, in _convert_common
    g = process_tf_graph(tf_graph, const_node_values=const_node_values, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tf2onnx/tfonnx.py", line 434, in process_tf_graph
    initialized_tables, tensors_to_rename, is_tflite, dequantize)
  File "/usr/local/lib/python3.6/dist-packages/tf2onnx/tfonnx.py", line 483, in process_graphs
    initialized_tables, is_tflite, dequantize)
  File "/usr/local/lib/python3.6/dist-packages/tf2onnx/tfonnx.py", line 592, in process_parsed_graph
    raise exceptions[0]
  File "/usr/local/lib/python3.6/dist-packages/tf2onnx/tfonnx.py", line 291, in tensorflow_onnx_mapping
    func(g, node, **kwargs, initialized_tables=initialized_tables, dequantize=dequantize)
  File "/usr/local/lib/python3.6/dist-packages/tf2onnx/onnx_opset/tensor.py", line 1841, in version_11
    cls.any_version(11, ctx, node, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tf2onnx/onnx_opset/tensor.py", line 1770, in any_version
    utils.make_sure(len(node.inputs) <= 5 or int(node.inputs[5].get_tensor_value(False)) == 0,
  File "/usr/local/lib/python3.6/dist-packages/tf2onnx/graph.py", line 314, in get_tensor_value
    raise ValueError("get tensor value: '{}' must be Const".format(self.name))
ValueError: get tensor value: 'loop_body_1_nonmaxsuppressionv5_pfor_while_loop_body_1_nonmaxsuppressionv5_soft_nms_sigma_0' must be Const
```

1. Used model: https://tfhub.dev/tensorflow/efficientdet/lite1/detection/1

2. Environment:
   - Cython 0.29.23
   - numpy 1.19.1
   - onnx 1.7.0
   - onnxruntime-gpu 1.5.2
   - pycuda 2019.1.2
   - tensorflow-gpu 2.2.0
   - tf2onnx 1.10.0
   - torch 1.7.1
   - torch2trt 0.2.0

3. Used command: `python3 -m tf2onnx.convert --saved-model ./ --opset 12 --output model.onnx`

I only ran the basic conversion command, but this error occurred and I can't figure out what is wrong. Any help would be appreciated.

TomWildenhain-Microsoft commented 3 years ago

Took a look at the model and found that the NonMaxSuppression op in it has a soft_nms_sigma value of 0.25, which is a problem since ONNX only supports hard NMS.

It may be possible to convert the model if we use hard NMS instead, but that would produce slightly different results than the original. The model also has a StatelessWhile + TensorListConcat sequence that we haven't seen before, but it should be easy to convert it to the TensorListStack op, which we can already handle correctly.
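
For context, this is roughly where soft_nms_sigma comes from on the TF side (a minimal sketch with illustrative values, not the actual efficientdet-lite postprocessing code): `tf.image.non_max_suppression_with_scores` is what lowers to NonMaxSuppressionV5, and setting `soft_nms_sigma=0.0` reduces it to ordinary hard NMS, which is the only behavior ONNX NonMaxSuppression can express.

```python
# Minimal sketch: soft vs. hard NMS in the TF API (illustrative values only).
import tensorflow as tf

boxes = tf.random.uniform([100, 4])   # [num_boxes, 4]
scores = tf.random.uniform([100])     # [num_boxes]

# Soft NMS (sigma > 0): overlapping boxes get their scores decayed rather than
# dropped. This is what produces NonMaxSuppressionV5 with a nonzero sigma.
soft_idx, soft_scores = tf.image.non_max_suppression_with_scores(
    boxes, scores, max_output_size=50,
    iou_threshold=0.5, score_threshold=0.001, soft_nms_sigma=0.25)

# Hard NMS (sigma == 0): overlapping boxes are simply suppressed. This is the
# behavior ONNX NonMaxSuppression supports, i.e. the proposed fallback.
hard_idx, hard_scores = tf.image.non_max_suppression_with_scores(
    boxes, scores, max_output_size=50,
    iou_threshold=0.5, score_threshold=0.001, soft_nms_sigma=0.0)
```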

@guschmue do you think falling back to hard NMS in these cases and printing a warning would be OK, or would it be better to propose an ONNX change/onnxruntime-extension?

guschmue commented 3 years ago

Let me take a look at the research paper. I can also take a look at the source to see how the lite version differs from the regular efficientdet.

sniperHJJ commented 3 years ago

@TomWildenhain-Microsoft Thank you for your reply.

TomWildenhain-Microsoft commented 3 years ago

@JunHyunjae we can leave this issue open if you are still interested in converting the model. Sometimes new ops are added to onnx that make previously difficult-to-convert models convertible. What is your primary motivation for converting to onnx?

guschmue commented 3 years ago

I guess the proper thing is to open an issue for onnx to support soft_nms in a future opset. We could temporarily just use hard_nms and would lose ~1 mAP for efficientdet-lite1 ... not great, but better than not working. And we'd need to find some way to handle the TensorList ops.
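
For illustration (a rough sketch with made-up tensor names, not tf2onnx's actual conversion code), this is the ONNX NonMaxSuppression node a hard_nms fallback would map to. It takes at most five inputs and has no soft_nms_sigma input at all, which is why soft NMS can't be expressed directly:

```python
# Rough sketch of a standalone ONNX graph containing a hard-NMS node.
import numpy as np
import onnx
from onnx import TensorProto, helper, numpy_helper

# NonMaxSuppression inputs: boxes, scores, max_output_boxes_per_class,
# iou_threshold, score_threshold. There is no sigma input.
nms = helper.make_node(
    "NonMaxSuppression",
    inputs=["boxes", "scores", "max_boxes", "iou_thresh", "score_thresh"],
    outputs=["selected_indices"],  # int64, shape [num_selected, 3]
    center_point_box=0,            # 0 = corner coords [y1, x1, y2, x2] (TF style)
)

graph = helper.make_graph(
    [nms], "hard_nms_example",
    inputs=[
        helper.make_tensor_value_info("boxes", TensorProto.FLOAT, [1, None, 4]),
        helper.make_tensor_value_info("scores", TensorProto.FLOAT, [1, 1, None]),
    ],
    outputs=[
        helper.make_tensor_value_info("selected_indices", TensorProto.INT64, [None, 3]),
    ],
    initializer=[
        numpy_helper.from_array(np.array([100], dtype=np.int64), "max_boxes"),
        numpy_helper.from_array(np.array([0.5], dtype=np.float32), "iou_thresh"),
        numpy_helper.from_array(np.array([0.001], dtype=np.float32), "score_thresh"),
    ],
)

model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 12)])
onnx.checker.check_model(model)  # valid model; soft NMS simply has no place here
```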