Closed: amamory closed this issue 4 years ago
Sorry for the late response. Can you confirm load_model is imported from tf.keras?
Thanks for the reply. Yes, it is generated with tf.keras. In fact, the entire notebook is linked, so you can easily check what I did, which was virtually nothing. I am just trying to load an hdf5 generated by the tutorial.
It requires permission to access your notebook.
In tf.keras, you usually need to run model(input) or model.predict(input) before the conversion. Could that be the reason?
Oops, sorry. I have now fixed the access issue. Yes, the notebook does a single prediction before the ONNX conversion.
@amamory, the cause of this weird issue is that Keras keeps some global state, and re-loading the model changes those variables.
So adding tensorflow.keras.backend.clear_session() before load_model will fix this issue.
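For reference, a minimal sketch of the suggested ordering (clear the session, load the model, run one prediction, then convert); the model file name, input shape, and output path below are hypothetical placeholders, not the notebook's actual values:
import numpy as np
import tensorflow as tf
import keras2onnx
import onnx

# Reset Keras' global state before loading, as suggested above.
tf.keras.backend.clear_session()

# Hypothetical file names; adjust to your own model.
model = tf.keras.models.load_model("model.hdf5")
dummy_input = np.zeros((1,) + model.input_shape[1:], dtype=np.float32)
model.predict(dummy_input)  # run a single prediction before converting

onnx_model = keras2onnx.convert_keras(model, model.name)
onnx.save_model(onnx_model, "model.onnx")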
Thank you very much! I am closing the issue.
@wenbingl I am also facing issues converting the EfficientNet model. Here is my code:
import numpy as np
import efficientnet.tfkeras as efn
from tensorflow.keras.applications.imagenet_utils import decode_predictions, preprocess_input
from efficientnet.preprocessing import center_crop_and_resize
from skimage.io import imread
from tensorflow.keras.models import load_model
import tensorflow as tf
import keras2onnx
import onnx  # needed for onnx.save_model below
print("TensorFlow version is "+tf.__version__)
print("keras2onnx version is "+keras2onnx.__version__)
tf.keras.backend.clear_session()
model = efn.EfficientNetB0(weights='imagenet')
image = imread('/Users/l0stpenguin/Downloads/panda.jpg')
image_size = model.input_shape[1]
x = center_crop_and_resize(image, image_size=image_size)
x = preprocess_input(x, mode='torch')
inputs = np.expand_dims(x, 0)
expected = model.predict(inputs)
decode_predictions(expected)
output_model_path = "keras_efficientNet.onnx"
onnx_model = keras2onnx.convert_keras(model, model.name)
onnx.save_model(onnx_model, output_model_path)
Here is the error:
from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
2020-05-29 19:40:45.172019: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-05-29 19:40:45.236340: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7fb722396d00 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-05-29 19:40:45.236395: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
/Users/l0stpenguin/Library/Python/3.7/lib/python/site-packages/skimage/transform/_warps.py:105: UserWarning: The default mode, 'constant', will be changed to 'reflect' in skimage 0.15.
warn("The default mode, 'constant', will be changed to 'reflect' in "
/Users/l0stpenguin/Library/Python/3.7/lib/python/site-packages/skimage/transform/_warps.py:110: UserWarning: Anti-aliasing will be enabled by default in skimage 0.15 to avoid aliasing artifacts when down-sampling images.
warn("Anti-aliasing will be enabled by default in skimage 0.15 to "
WARNING:tensorflow:From /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/keras2onnx/subgraph.py:156: tensor_shape_from_node_def_name (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.graph_util.tensor_shape_from_node_def_name`
Exception on this tf.graph
TFNodes8/block2b_drop/cond/dropout/truediv/x
TFNodes8/block2b_drop/cond/dropout/rate
TFNodes8/block2b_drop/cond/dropout/sub/x
TFNodes8/block2b_drop/cond/dropout/sub
TFNodes8/block2b_drop/cond/dropout/truediv
TFNodes8/block2b_drop/cond/dropout/random_uniform/min
TFNodes8/block2b_drop/cond/dropout/random_uniform/max
TFNodes8/block2b_drop/cond/dropout/random_uniform/sub
TFNodes8/block2b_drop/cond/dropout/random_uniform/shape/1
TFNodes8/block2b_drop/cond/dropout/random_uniform/shape/2
TFNodes8/block2b_drop/cond/dropout/random_uniform/shape/3
TFNodes8/block2b_drop/cond/strided_slice/stack
TFNodes8/block2b_drop/cond/strided_slice/stack_1
TFNodes8/block2b_drop/cond/strided_slice/stack_2
TFNodes8/keras_learning_phase
TFNodes8/block2b_drop/cond/pred_id
TFNodes8/block2b_project_bn/cond/Merge
TFNodes8/block2b_drop/cond/Identity/Switch
TFNodes8/block2b_drop/cond/Identity
TFNodes8/block2b_drop/cond/Shape/Switch
TFNodes8/block2b_drop/cond/dropout/mul
TFNodes8/block2b_drop/cond/Shape
TFNodes8/block2b_drop/cond/strided_slice
TFNodes8/block2b_drop/cond/dropout/random_uniform/shape
TFNodes8/block2b_drop/cond/dropout/random_uniform/RandomUniform
TFNodes8/block2b_drop/cond/dropout/random_uniform/mul
TFNodes8/block2b_drop/cond/dropout/random_uniform
TFNodes8/block2b_drop/cond/dropout/GreaterEqual
TFNodes8/block2b_drop/cond/dropout/Cast
TFNodes8/block2b_drop/cond/dropout/mul_1
TFNodes8/block2b_drop/cond/Merge
TFNodes8_identity
TFNodes8_identity_1
Traceback (most recent call last):
File "onnx_export.py", line 25, in <module>
onnx_model = keras2onnx.convert_keras(model, model.name)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/keras2onnx/main.py", line 92, in convert_keras
parse_graph(topology, tf_graph, target_opset, output_names)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/keras2onnx/parser.py", line 828, in parse_graph
return _parse_graph_core(graph, keras_layer_ts_map, topo, top_level, output_names)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/keras2onnx/parser.py", line 756, in _parse_graph_core
_infer_graph_shape(topology, top_scope, varset)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/keras2onnx/parser.py", line 479, in _infer_graph_shape
_finalize_tf2onnx_op(topology, oop, varset)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/keras2onnx/parser.py", line 449, in _finalize_tf2onnx_op
g = tf2onnx_wrap(topo, subgraph, outputs, varset.target_opset)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/keras2onnx/wrapper.py", line 357, in tf2onnx_wrap
raise e
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/keras2onnx/wrapper.py", line 351, in tf2onnx_wrap
output_names=outputs)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/keras2onnx/ktf2onnx/tf2onnx/tfonnx.py", line 569, in process_tf_graph
topological_sort(g, continue_on_error)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/keras2onnx/ktf2onnx/tf2onnx/tfonnx.py", line 407, in topological_sort
g.topological_sort(ops)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/keras2onnx/ktf2onnx/tf2onnx/graph.py", line 835, in topological_sort
utils.make_sure(j is not None, "Cannot find node with output {}".format(inp))
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/keras2onnx/ktf2onnx/tf2onnx/utils.py", line 290, in make_sure
raise ValueError("make_sure failure: " + error_msg % args)
ValueError: make_sure failure: Cannot find node with output TFNodes8/block2b_drop/cond/Merge:1
TensorFlow version is 1.15.0
keras2onnx version is 1.6.1
How can I fix this?
@l0stpenguin, please upgrade your converter to the latest code in the master branch. Please check the project README for the installation guide.
I was trying to convert a U-Net (EfficientNet backbone) to ONNX. Before I added tensorflow.keras.backend.clear_session(), I got AssertionError: stem_bn/keras_learning_phase:0 is disconnected. After I added tensorflow.keras.backend.clear_session(), the following error appears:
File "c.py", line 12, in <module>
model = sm.Unet(backbone_name='efficientnetb7', classes=3,activation='softmax',weights='model_purchase_190.h5')
File "/usr/local/lib/python3.6/dist-packages/segmentation_models/__init__.py", line 34, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/segmentation_models/models/unet.py", line 250, in Unet
model.load_weights(weights)
File "/usr/local/lib/python3.6/dist-packages/keras/engine/network.py", line 1180, in load_weights
f, self.layers, reshape=reshape)
File "/usr/local/lib/python3.6/dist-packages/keras/engine/saving.py", line 929, in load_weights_from_hdf5_group
K.batch_set_value(weight_value_tuples)
File "/usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py", line 2430, in batch_set_value
assign_op = x.assign(assign_placeholder)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variables.py", line 1762, in assign
name=name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/state_ops.py", line 223, in assign
validate_shape=validate_shape)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_state_ops.py", line 64, in assign
use_locking=use_locking, name=name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py", line 350, in _apply_op_helper
g = ops._get_graph_from_inputs(_Flatten(keywords.values()))
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 5713, in _get_graph_from_inputs
_assert_same_graph(original_graph_element, graph_element)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 5649, in _assert_same_graph
original_item))
ValueError: Tensor("Placeholder:0", shape=(3, 3, 3904, 256), dtype=float32) must be from the same graph as Tensor("decoder_stage0a_conv/kernel:0", shape=(3, 3, 3904, 256), dtype=float32_ref).
my code:
import segmentation_models as sm
import keras
import tensorflow  # needed for tensorflow.keras.backend.clear_session() below
from keras2onnx import convert_keras
#from engine import *
onnx_path = 'unet.onnx'
engine_name = 'unet.plan'
batch_size = 1
CHANNEL = 3
HEIGHT = 416
WIDTH = 416
model = sm.Unet(backbone_name='efficientnetb7', classes=3, activation='softmax', weights='model_purchase_190.h5')
tensorflow.keras.backend.clear_session()
model._layers[0].batch_input_shape = (None, 416, 416, 3)
model = keras.models.clone_model(model)
onx = convert_keras(model, onnx_path)
with open(onnx_path, "wb") as f:
    f.write(onx.SerializeToString())
Version info:
Python 3.6.9, TensorFlow 1.13, keras2onnx 1.7.0, segmentation_models
How can I fix it?
Have you tried tf.keras.backend.set_learning_phase(0) at the beginning?
It doesn't work
Just for anyone still facing this issue: I had the same problem with EfficientNet when installing from pip, then I just updated from source:
pip install -U git+https://github.com/microsoft/onnxconverter-common
pip install -U git+https://github.com/onnx/keras-onnx
and it works.
Hey, I am also experiencing the same problems. I cloned the repos as @Gabrielllopes suggested, and I still get the following:
tf executing eager_mode: True
tf.keras model eager_mode: False
[3.4436421394348145, 0.10566037893295288]
WARN: No corresponding ONNX op matches the tf.op node normalization_3/Reshape/ReadVariableOp/resource of type Placeholder
The generated ONNX model needs run with the custom op supports.
WARN: No corresponding ONNX op matches the tf.op node normalization_3/Reshape_1/ReadVariableOp/resource of type Placeholder
The generated ONNX model needs run with the custom op supports.
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-43-99757befbea5> in <module>
143 model_text_reduced = Sequential()
144
--> 145 vision_model_ENET = load_vision_model(a, dataset, classes, resolution, train_ds, val_ds)
146
147 # Load the specified vision model in 'a'
<ipython-input-26-8a061c48bb8a> in load_vision_model(model, dataset, classes, resolution, train_ds, val_ds)
1 def load_vision_model(model,dataset, classes, resolution, train_ds, val_ds):
----> 2 return comp(model, dataset, classes, resolution, train_ds, val_ds)
<ipython-input-18-53d4c1603f3c> in comp(model, dataset, classes, resolution, train_ds, val_ds)
1 def comp(model, dataset, classes, resolution, train_ds, val_ds):
2 #visualize_set(train_original_xy, test_original_xy, val_original_xy)
----> 3 result_for, histories = execute_for_dataset(model, dataset, train_ds, val_ds, resolution, classes)
4 return result_for
<ipython-input-42-95bbd18ff0a1> in execute_for_dataset(model, dataset, train_ds, val_ds, resolution, classes)
3
4 def execute_for_dataset( model, dataset, train_ds, val_ds, resolution, classes):
----> 5 result_for, histories = run(model, dataset, params_224_imageNet, train_ds, val_ds, resolution, classes)
6 return result_for, histories
7
<ipython-input-42-95bbd18ff0a1> in run(model, dataset, params_dict, train_ds, val_ds, resolution, classes)
90 )
91 print(model.evaluate(val_ds,verbose = 1))
---> 92 onnx_model = keras2onnx.convert_keras(model, model.name)
93 temp_model_file = 'vision_model.onnx'
94 onnx.save_model(onnx_model, temp_model_file)
~/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/keras2onnx/main.py in convert_keras(model, name, doc_string, target_opset, initial_types, channel_first_inputs, debug_mode, custom_op_conversions)
97 parse_graph_modeless(topology, tf_graph, target_opset, input_names, output_names, output_dict)
98 else:
---> 99 parse_graph(topology, tf_graph, target_opset, output_names, output_dict)
100 topology.compile()
101
~/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/keras2onnx/parser.py in parse_graph(topo, graph, target_opset, output_names, keras_node_dict)
905 return _parse_graph_core_v2(
906 graph, keras_node_dict, topo, top_level, output_names
--> 907 ) if is_tf2 and is_tf_keras else _parse_graph_core(
908 graph, keras_node_dict, topo, top_level, output_names)
~/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/keras2onnx/parser.py in _parse_graph_core_v2(graph, keras_node_dict, topology, top_scope, output_names)
782 elif layer_info.layer is None or get_converter(type(layer_info.layer)) is None or \
783 (isinstance(layer_info.layer, keras.layers.core.Activation) and not activation_supported):
--> 784 _on_parsing_tf_nodes(graph, layer_info.nodelist, varset, topology.debug_mode)
785 else:
786 on_parsing_keras_layer_v2(graph, layer_info, varset)
~/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/keras2onnx/parser.py in _on_parsing_tf_nodes(graph, nodelist, varset, debug_mode)
321 oname = o_.name
322 k2o_logger().debug('\toutput: ' + oname)
--> 323 out0 = varset.get_local_variable_or_declare_one(oname, infer_variable_type(o_, varset.target_opset))
324 operator.add_output(out0)
325
~/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/keras2onnx/_parser_tf.py in infer_variable_type(tensor, opset, inbound_node_shape)
42 else:
43 raise ValueError(
---> 44 "Unable to find out a correct type for tensor type = {} of {}".format(tensor_type, tensor.name))
45
46
ValueError: Unable to find out a correct type for tensor type = 20 of normalization_3/Reshape/ReadVariableOp/resource:0
@ch-hristov same here
TensorFlow version is 2.3.1
keras2onnx version is 1.8.0
WARN: No corresponding ONNX op matches the tf.op node normalization/Reshape/ReadVariableOp/resource of type Placeholder
The generated ONNX model needs run with the custom op supports.
WARN: No corresponding ONNX op matches the tf.op node normalization/Reshape_1/ReadVariableOp/resource of type Placeholder
The generated ONNX model needs run with the custom op supports.
Traceback (most recent call last):
File "convert_onnx.py", line 15, in <module>
onnx_model = keras2onnx.convert_keras(model, model.name)
File "/usr/local/lib/python3.6/dist-packages/keras2onnx/main.py", line 99, in convert_keras
parse_graph(topology, tf_graph, target_opset, output_names, output_dict)
File "/usr/local/lib/python3.6/dist-packages/keras2onnx/parser.py", line 907, in parse_graph
) if is_tf2 and is_tf_keras else _parse_graph_core(
File "/usr/local/lib/python3.6/dist-packages/keras2onnx/parser.py", line 784, in _parse_graph_core_v2
_on_parsing_tf_nodes(graph, layer_info.nodelist, varset, topology.debug_mode)
File "/usr/local/lib/python3.6/dist-packages/keras2onnx/parser.py", line 328, in _on_parsing_tf_nodes
var_type = infer_variable_type(i_, varset.target_opset)
File "/usr/local/lib/python3.6/dist-packages/keras2onnx/_parser_tf.py", line 44, in infer_variable_type
"Unable to find out a correct type for tensor type = {} of {}".format(tensor_type, tensor.name))
ValueError: Unable to find out a correct type for tensor type = 20 of normalization/Reshape/ReadVariableOp/resource:0
@Gabrielllopes Which version of tensorflow, onnx, keras-onnx did you use?
OK, I made a very small test code:
from tensorflow.keras.applications import EfficientNetB0
from tensorflow.keras.models import load_model
import tensorflow as tf
import keras2onnx
import onnx
print("TensorFlow version is "+tf.__version__)
print("keras2onnx version is "+keras2onnx.__version__)
tf.keras.backend.clear_session()
model = EfficientNetB0(weights='imagenet')
output_model_path = "keras_efficientNet.onnx"
onnx_model = keras2onnx.convert_keras(model, model.name)
onnx.save_model(onnx_model, output_model_path)
This is throwing the following error:
2021-02-08 18:43:15.094654: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
TensorFlow version is 2.3.1
keras2onnx version is 1.7.0
2021-02-08 18:43:16.751709: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
2021-02-08 18:43:16.760119: E tensorflow/stream_executor/cuda/cuda_driver.cc:314] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2021-02-08 18:43:16.760164: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: baseNgctf2Evotegra2Opencv4
2021-02-08 18:43:16.760172: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: baseNgctf2Evotegra2Opencv4
2021-02-08 18:43:16.760268: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:200] libcuda reported version is: 455.32.0
2021-02-08 18:43:16.760300: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:204] kernel reported version is: 455.32.0
2021-02-08 18:43:16.760307: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:310] kernel version seems to match DSO: 455.32.0
2021-02-08 18:43:16.823842: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 3499925000 Hz
2021-02-08 18:43:16.860991: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5503cc0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2021-02-08 18:43:16.861011: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
tf executing eager_mode: True
tf.keras model eager_mode: False
WARN: No corresponding ONNX op matches the tf.op node normalization/Reshape/ReadVariableOp/resource of type Placeholder
The generated ONNX model needs run with the custom op supports.
WARN: No corresponding ONNX op matches the tf.op node normalization/Reshape_1/ReadVariableOp/resource of type Placeholder
The generated ONNX model needs run with the custom op supports.
Traceback (most recent call last):
File "convert_onnx.py", line 14, in <module>
onnx_model = keras2onnx.convert_keras(model, model.name)
File "/usr/local/lib/python3.6/dist-packages/keras2onnx/main.py", line 80, in convert_keras
parse_graph(topology, tf_graph, target_opset, output_names, output_dict)
File "/usr/local/lib/python3.6/dist-packages/keras2onnx/parser.py", line 841, in parse_graph
) if is_tf2 and is_tf_keras else _parse_graph_core(
File "/usr/local/lib/python3.6/dist-packages/keras2onnx/parser.py", line 729, in _parse_graph_core_v2
_on_parsing_tf_nodes(graph, layer_info.nodelist, varset, topology.debug_mode)
File "/usr/local/lib/python3.6/dist-packages/keras2onnx/parser.py", line 318, in _on_parsing_tf_nodes
out0 = varset.get_local_variable_or_declare_one(oname, infer_variable_type(o_, varset.target_opset))
File "/usr/local/lib/python3.6/dist-packages/keras2onnx/_parser_tf.py", line 48, in infer_variable_type
"Unable to find out a correct type for tensor type = {} of {}".format(tensor_type, tensor.name))
ValueError: Unable to find out a correct type for tensor type = 20 of normalization/Reshape/ReadVariableOp/resource:0
Does anyone have an idea why this is happening, or how to avoid it?
Hey @amamory, did this work for you? I am getting the same error as you:
AssertionError: input_1:01 is disconnected, check the parsing log for more details.
Adding tensorflow.keras.backend.clear_session() didn't work. Any ideas? Thank you.
Hi @bnascimento, I am still getting this error (I have tried all the fixes mentioned above):
AssertionError: input_1:01 is disconnected, check the parsing log for more details.
Did you or anyone else here manage to fix this or find a workaround? Thank you very much in advance!
Hi @EyGy @bnascimento @ch-hristov
The workaround is to convert the model via pb (SavedModel format):
1) Convert the model to pb format: https://www.tensorflow.org/guide/saved_model
2) Convert the pb model to ONNX with the tf2onnx converter: https://github.com/onnx/tensorflow-onnx
With this approach I converted a PSPNet model with an efficientnet_b4 backbone (segmentation model from https://github.com/qubvel/segmentation_models).
Hope this helps you. All the best.
Thank you very much for your help. Meanwhile I found and used the same workaround as well (convert saved_model to .pb -> convert .pb to onnx).
I feel like it is disproportionately difficult to convert tf 2.x models into .pb files. The workflow that finally did the job for me:
@bulatnv did you use tf 2.x? And if so, how did you manage to get your EfficientNet into .pb format? I feel like there has to be a more convenient way...
While this workaround seems to do the job, it would be interesting to find out whether the AssertionError is a keras-onnx bug specific to EfficientNet architectures, and how it could be fixed. So maybe we should reopen the issue, since EfficientNet is the example model and a lot of people seem to struggle with this.
Hi @EyGy, I use tf==2.3.1.
Well, for saving the model to .pb (SavedModel) format I used tf.saved_model.save: https://www.tensorflow.org/guide/saved_model
from tensorflow.keras.models import load_model
import tensorflow as tf
keras_model_path = 'pspnet-033-0.1072.h5'
keras_model = load_model(keras_model_path)
keras_model.summary()
input_names = [n.name for n in keras_model.inputs]
output_names = [n.name for n in keras_model.outputs]
print('inputs:', input_names)
print('outputs:', output_names)
tf.saved_model.save(keras_model, 'pspnet')
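For the second step (SavedModel directory to ONNX), a minimal sketch that simply invokes the tf2onnx command-line converter on the 'pspnet' directory written above; this assumes tf2onnx is installed, and the output file name and opset are placeholders to adjust:
import subprocess

# Equivalent to running on the command line:
#   python -m tf2onnx.convert --saved-model pspnet --output pspnet.onnx --opset 11
subprocess.run(
    ["python", "-m", "tf2onnx.convert",
     "--saved-model", "pspnet",
     "--output", "pspnet.onnx",
     "--opset", "11"],
    check=True,
)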
Thanks for your reply. I do know about the SavedModel format, which however is not the same as a frozen graph with variables saved as constants in a *.pb file (the process I described above). I got the original error
AssertionError: input_1:01 is disconnected, check the parsing log for more details.
using the SavedModel format you described. For me the workaround of freezing the graph did the job, which is not really convenient in tf 2+. Anyway, I appreciate your help!
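For context, freezing a tf.keras model into a constant-folded .pb in TF 2.x usually looks roughly like the sketch below; the model path is a hypothetical placeholder and this is not necessarily the exact script used above:
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

# Hypothetical model file; replace with your own.
model = tf.keras.models.load_model('model.h5')

# Wrap the model in a concrete function and fold its variables into constants.
full_model = tf.function(lambda x: model(x))
concrete_func = full_model.get_concrete_function(
    tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype))
frozen_func = convert_variables_to_constants_v2(concrete_func)

# Write the frozen graph; the resulting .pb can then be passed to the tf2onnx converter.
tf.io.write_graph(frozen_func.graph, '.', 'frozen_model.pb', as_text=False)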
I was trying the tutorial notebook and I made small modifications. See my notebook here.
1) After the existing model hdf5 file is loaded, I saved the model again with a different name
and ran the notebook until the end. No errors, as expected.
2) Then, in the second run, I replaced the original load-model command with keras' load_model.
Then, while converting to ONNX, it issued this error:
Version info:
It seems that keras2onnx has issues loading an hdf5 saved by tf.keras, or at least this specific version.
Any thoughts?
Alexandre