tensorlayer / HyperPose

Library for Fast and Flexible Human Pose Estimation
https://hyperpose.readthedocs.io

Exporting model to ONNX #292

Closed salvador-blanco closed 4 years ago

salvador-blanco commented 4 years ago

Hi, thank you very much for the hard work.

I have been trying to export a model, following the HyperPose documentation (Link), and I was able to export my model to .pb :)

But I get stuck at finding the input and output nodes of the .pb for --inputs input0:0,input1:0 --outputs output0:0

I figured out I should install Bazel and then compile TensorFlow with Bazel. Once TensorFlow is compiled, I should be able to run summarize_graph --in_graph=your_frozen_model.pb ?

To compile TensorFlow I tried Link, but when I run ./configure it gives me so many options that I seriously don't know what I am doing. Then, when I try bazel build tensorflow/tools/graph_transforms:summarize_graph, I get: ERROR: Config value cuda is not defined in any .rc file
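
For reference, the sequence I believe the TensorFlow build instructions describe (assuming a CPU-only build, so the cuda config should never be needed) is roughly:

git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
./configure   # answer No to CUDA support for a CPU-only build
bazel build tensorflow/tools/graph_transforms:summarize_graph   # build only the summarize_graph tool

but I am not sure where my setup goes wrong.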

Is there an easier way to get the input and output nodes? I spent about 8 hours trying to get them with no luck.

I would appreciate any help, thank you :)

salvador-blanco commented 4 years ago

According to tensorflow-onnx, if the model is saved in the saved-model format, specifying the inputs and outputs is not necessary. Is it possible to do that?
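
(For what it's worth, the tf2onnx README describes a saved-model path that does not need explicit node names; assuming the model could be exported to a directory such as ./my_saved_model, the call would look like this. The directory name here is only a placeholder:

python -m tf2onnx.convert --saved-model ./my_saved_model --output model.onnx

but that of course requires the training code to produce a SavedModel in the first place.)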

salvador-blanco commented 4 years ago

I was able to compile TensorFlow and then run summarize_graph --in_graph=your_frozen_model.pb. In my first attempt I tried to compile TensorFlow 1.8 with no luck; with TensorFlow 2.0 and an updated Bazel I succeeded.

Still, I am not sure how to format the arguments --inputs input0:0,input1:0 --outputs output0:0

Here is my output from running graph_transforms:summarize_graph:

bazel-bin/tensorflow/tools/graph_transforms/summarize_graph --in_graph=/home/chava/Kyutech/hyperpose/save_dir/chava_coco/frozen_chava_coco.pb
Found 1 possible inputs: (name=x, type=float(1), shape=[?,3,368,432]) 
No variables spotted.
Found 2 possible outputs: (name=Identity, op=Identity) (name=Identity_1, op=Identity) 
Found 7905756 (7.91M) const parameters, 0 (0) variable parameters, and 1000 control_edges
Op types used: 355 Identity, 300 Const, 100 Reshape, 75 Mul, 43 Conv2D, 36 Relu, 35 AddV2, 31 BiasAdd, 25 Add, 25 Rsqrt, 25 Sub, 2 NoOp, 1 ConcatV2, 1 MaxPool, 1 Placeholder
To use with tensorflow/tools/benchmark:benchmark_model try these arguments:
bazel run tensorflow/tools/benchmark:benchmark_model -- --graph=/home/chava/Kyutech/hyperpose/save_dir/chava_coco/frozen_chava_coco.pb --show_flops --input_layer=x --input_layer_type=float --input_layer_shape=-1,3,368,432 --output_layer=Identity,Identity_1

Which of these should go into --inputs and --outputs?

Gyx-One commented 4 years ago

@salvador-blanco Very glad to see you got summarize_graph working! Indeed, it is not so convenient to install.

1. Is it possible to export the model in the saved-model format? It seems that only models trained with the Keras high-level APIs in TensorFlow can be exported in the saved_model format, but we are using TensorLayer now. I will check whether it is possible to make TensorLayer models compatible with the saved-model export (I'm not sure).

2. How to format the data for --inputs input0:0,input1:0 --outputs output0:0? You already arrived at the last step! The format is nodename:0. From your output you can see that your input node is x and your output nodes are Identity and Identity_1, so the arguments are: --inputs x:0 --outputs Identity:0,Identity_1:0
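
Putting this together with the paths from your summarize_graph run above, the full conversion command should look roughly like this:

python -m tf2onnx.convert --graphdef /home/chava/Kyutech/hyperpose/save_dir/chava_coco/frozen_chava_coco.pb --output /home/chava/Kyutech/hyperpose/save_dir/chava_coco/frozen_chava_coco.onnx --inputs x:0 --outputs Identity:0,Identity_1:0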

If you have any other problem, please contact me!

Gyx-One commented 4 years ago

The documentation on exporting has been modified to be more specific.

salvador-blanco commented 4 years ago

Hello, I get an error when using:

python -m tf2onnx.convert --graphdef /home/chava/Kyutech/hyperpose/save_dir/Human_OP_Def/frozen_Human_OP_Def.pb --output /home/chava/Kyutech/hyperpose/save_dir/Human_OP_Def/frozen_Human_OP_Def.onnx --inputs x:0 --outputs Identity:0,Identity_1:0

The output of bazel-bin/tensorflow/tools/graph_transforms/summarize_graph is (LightweightOpenpose with the default backbone):

bazel-bin/tensorflow/tools/graph_transforms/summarize_graph --in_graph=/home/chava/Kyutech/hyperpose/save_dir/Human_OP_Def/frozen_Human_OP_Def.pb
Found 1 possible inputs: (name=x, type=float(1), shape=[?,3,368,432]) 
No variables spotted.
Found 2 possible outputs: (name=Identity, op=Identity) (name=Identity_1, op=Identity) 
Found 4622281 (4.62M) const parameters, 0 (0) variable parameters, and 1339 control_edges
Op types used: 467 Identity, 415 Const, 144 Reshape, 108 Mul, 50 Relu, 43 Conv2D, 42 AddV2, 36 Add, 36 Rsqrt, 36 Sub, 32 BiasAdd, 11 DepthwiseConv2dNative, 2 NoOp, 1 ConcatV2, 1 Placeholder, 1 BatchToSpaceND, 1 SpaceToBatchND
To use with tensorflow/tools/benchmark:benchmark_model try these arguments:
bazel run tensorflow/tools/benchmark:benchmark_model -- --graph=/home/chava/Kyutech/hyperpose/save_dir/Human_OP_Def/frozen_Human_OP_Def.pb --show_flops --input_layer=x --input_layer_type=float --input_layer_shape=-1,3,368,432 --output_layer=Identity,Identity_1

The error I get is:

python -m tf2onnx.convert --graphdef /home/chava/Kyutech/hyperpose/save_dir/Human_OP_Def/frozen_Human_OP_Def.pb --output /home/chava/Kyutech/hyperpose/save_dir/Human_OP_Def/frozen_Human_OP_Def.onnx --inputs x:0 --outputs Identity:0,Identity_1:0
WARNING:tensorflow:From /home/chava/Kyutech/anaconda3/envs/hyperpose/lib/python3.7/site-packages/tf2onnx/tf_loader.py:122: convert_variables_to_constants (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.graph_util.convert_variables_to_constants`
2020-10-06 12:21:04,216 - WARNING - From /home/chava/Kyutech/anaconda3/envs/hyperpose/lib/python3.7/site-packages/tf2onnx/tf_loader.py:122: convert_variables_to_constants (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.graph_util.convert_variables_to_constants`
WARNING:tensorflow:From /home/chava/Kyutech/anaconda3/envs/hyperpose/lib/python3.7/site-packages/tensorflow_core/python/framework/graph_util_impl.py:275: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.graph_util.extract_sub_graph`
2020-10-06 12:21:04,217 - WARNING - From /home/chava/Kyutech/anaconda3/envs/hyperpose/lib/python3.7/site-packages/tensorflow_core/python/framework/graph_util_impl.py:275: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.graph_util.extract_sub_graph`
INFO:tensorflow:Froze 0 variables.
2020-10-06 12:21:04,248 - INFO - Froze 0 variables.
INFO:tensorflow:Converted 0 variables to const ops.
2020-10-06 12:21:04,268 - INFO - Converted 0 variables to const ops.
2020-10-06 12:21:04,730 - INFO - Using tensorflow=2.0.0, onnx=1.7.0, tf2onnx=1.6.2/8d5253
2020-10-06 12:21:04,730 - INFO - Using opset <onnx, 8>
2020-10-06 12:21:07,529 - ERROR - Failed to convert node StatefulPartitionedCall/depthwiseconv2d_7/SpaceToBatchND
OP=SpaceToDepth
Name=StatefulPartitionedCall/depthwiseconv2d_7/SpaceToBatchND
Inputs:
    StatefulPartitionedCall/Relu_12:0=Relu, [-1, 512, 46, 54], 1
    StatefulPartitionedCall/depthwiseconv2d_7/SpaceToBatchND/block_shape:0=Const, [3], 6
    StatefulPartitionedCall/depthwiseconv2d_7/SpaceToBatchND/paddings:0=Const, [3, 2], 6
Outpus:
    StatefulPartitionedCall/depthwiseconv2d_7/SpaceToBatchND:0=[-1, 512, 25, 29], 1
Traceback (most recent call last):
  File "/home/chava/Kyutech/anaconda3/envs/hyperpose/lib/python3.7/site-packages/tf2onnx/tfonnx.py", line 256, in tensorflow_onnx_mapping
    func(g, node, **kwargs)
  File "/home/chava/Kyutech/anaconda3/envs/hyperpose/lib/python3.7/site-packages/tf2onnx/onnx_opset/tensor.py", line 1389, in version_1
    utils.make_sure(ctx.opset >= 11, 'non-4D tensor or non-const pads require opset 11')
  File "/home/chava/Kyutech/anaconda3/envs/hyperpose/lib/python3.7/site-packages/tf2onnx/utils.py", line 193, in make_sure
    raise ValueError("make_sure failure: " + error_msg % args)
ValueError: make_sure failure: non-4D tensor or non-const pads require opset 11
2020-10-06 12:21:07,530 - ERROR - Failed to convert node StatefulPartitionedCall/depthwiseconv2d_7/BatchToSpaceND
OP=DepthToSpace
Name=StatefulPartitionedCall/depthwiseconv2d_7/BatchToSpaceND
Inputs:
    StatefulPartitionedCall/depthwiseconv2d_7:0=Conv, [-1, 512, 23, 27], 1
    StatefulPartitionedCall/depthwiseconv2d_7/BatchToSpaceND/block_shape:0=Const, [3], 6
    StatefulPartitionedCall/depthwiseconv2d_7/BatchToSpaceND/crops:0=Const, [3, 2], 6
Outpus:
    StatefulPartitionedCall/depthwiseconv2d_7/BatchToSpaceND:0=[-1, 512, 46, 54], 1
Traceback (most recent call last):
  File "/home/chava/Kyutech/anaconda3/envs/hyperpose/lib/python3.7/site-packages/tf2onnx/tfonnx.py", line 256, in tensorflow_onnx_mapping
    func(g, node, **kwargs)
  File "/home/chava/Kyutech/anaconda3/envs/hyperpose/lib/python3.7/site-packages/tf2onnx/onnx_opset/tensor.py", line 1268, in version_1
    utils.make_sure(ctx.opset >= 11, 'non-4D tensor or non-const crops require opset 11')
  File "/home/chava/Kyutech/anaconda3/envs/hyperpose/lib/python3.7/site-packages/tf2onnx/utils.py", line 193, in make_sure
    raise ValueError("make_sure failure: " + error_msg % args)
ValueError: make_sure failure: non-4D tensor or non-const crops require opset 11
Traceback (most recent call last):
  File "/home/chava/Kyutech/anaconda3/envs/hyperpose/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/chava/Kyutech/anaconda3/envs/hyperpose/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/chava/Kyutech/anaconda3/envs/hyperpose/lib/python3.7/site-packages/tf2onnx/convert.py", line 169, in <module>
    main()
  File "/home/chava/Kyutech/anaconda3/envs/hyperpose/lib/python3.7/site-packages/tf2onnx/convert.py", line 153, in main
    inputs_as_nchw=args.inputs_as_nchw)
  File "/home/chava/Kyutech/anaconda3/envs/hyperpose/lib/python3.7/site-packages/tf2onnx/tfonnx.py", line 475, in process_tf_graph
    raise exceptions[0]
  File "/home/chava/Kyutech/anaconda3/envs/hyperpose/lib/python3.7/site-packages/tf2onnx/tfonnx.py", line 256, in tensorflow_onnx_mapping
    func(g, node, **kwargs)
  File "/home/chava/Kyutech/anaconda3/envs/hyperpose/lib/python3.7/site-packages/tf2onnx/onnx_opset/tensor.py", line 1389, in version_1
    utils.make_sure(ctx.opset >= 11, 'non-4D tensor or non-const pads require opset 11')
  File "/home/chava/Kyutech/anaconda3/envs/hyperpose/lib/python3.7/site-packages/tf2onnx/utils.py", line 193, in make_sure
    raise ValueError("make_sure failure: " + error_msg % args)
ValueError: make_sure failure: non-4D tensor or non-const pads require opset 11

I would appreciate your help very much, thanks.

Gyx-One commented 4 years ago

Hello @salvador-blanco! Simply adding --opset 11 to the python -m tf2onnx.convert command is enough :) Different opset levels of tf2onnx support different sets of neural-network operators that can be converted, and as the error log says, this network requires at least opset 11. So, in your case, the command should be:

python -m tf2onnx.convert --graphdef /home/chava/Kyutech/hyperpose/save_dir/Human_OP_Def/frozen_Human_OP_Def.pb --output /home/chava/Kyutech/hyperpose/save_dir/Human_OP_Def/frozen_Human_OP_Def.onnx --inputs x:0 --outputs Identity:0,Identity_1:0 --opset 11

Welcome to ask further questions :)
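
If you want a quick sanity check of the exported file afterwards (assuming the onnx Python package is installed), a small one-liner like this should do; it validates the model and prints its input and output names:

python -c "import onnx; m = onnx.load('/home/chava/Kyutech/hyperpose/save_dir/Human_OP_Def/frozen_Human_OP_Def.onnx'); onnx.checker.check_model(m); print([i.name for i in m.graph.input], [o.name for o in m.graph.output])"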

salvador-blanco commented 4 years ago

Thanks a lot, it worked :)