onnx / keras-onnx

Convert tf.keras/Keras models to ONNX
Apache License 2.0

convert_keras fails: "Cannot find the Placeholder op that is an input to the ReadVariableOp" #565

Open · CJCombrink opened this issue 3 years ago

CJCombrink commented 3 years ago

I am trying to convert a Keras model to ONNX and get the above error.

I trained a model using this GitHub repo: https://github.com/fbadine/LSTNet. The result is a JSON file and an H5 file. I then load the model, and when I try to convert it I get the following error:

File "convert_keras.py", line 28, in <module>
    onnx_model = keras2onnx.convert_keras(model, model.name, target_opset=7, debug_mode=True)
  File "C:\Dev\PythonVirtEnv\LSTNet\env\lib\site-packages\keras2onnx\main.py", line 82, in convert_keras
    tf_graph = build_layer_output_from_model(model, output_dict, input_names,
  File "C:\Dev\PythonVirtEnv\LSTNet\env\lib\site-packages\keras2onnx\_parser_tf.py", line 302, in build_layer_output_from_model
    return extract_outputs_from_subclassing_model(model, output_dict, input_names, output_names, input_specs)
  File "C:\Dev\PythonVirtEnv\LSTNet\env\lib\site-packages\keras2onnx\_parser_tf.py", line 263, in extract_outputs_from_subclassing_model
    graph_def, converted_input_indices = _convert_to_constants(
  File "C:\Dev\PythonVirtEnv\LSTNet\env\lib\site-packages\keras2onnx\_graph_cvt.py", line 506, in convert_variables_to_constants_v2
    raise ValueError("Cannot find the Placeholder op that is an input "
ValueError: Cannot find the Placeholder op that is an input to the ReadVariableOp.

Steps to reproduce:

  1. Follow the "Installation" instructions on the linked GitHub page
  2. Train the electricity model: ./electricity.sh
  3. Load the model in Python and convert it:

    import onnx
    import keras2onnx

    from LSTNet.lstnet_model import PreSkipTrans, PostSkipTrans, PreARTrans, PostARTrans
    from util.model_util import LoadModel

    onnx_model_name = 'electricity.onnx'
    keras_model_name = 'C:/Dev/PythonVirtEnv/LSTNet/save/electricity'

    # LSTNet's custom layers must be passed in so the model can be deserialized.
    custom_objects = {
        'PreSkipTrans': PreSkipTrans,
        'PostSkipTrans': PostSkipTrans,
        'PreARTrans': PreARTrans,
        'PostARTrans': PostARTrans
    }

    model = LoadModel(keras_model_name, custom_objects)
    onnx_model = keras2onnx.convert_keras(model, model.name, target_opset=7, debug_mode=True)

I did see that issue #528 might be related. My concern with that approach is that, from my understanding, it would require a change in the code that exports the model.

The problem I am working on requires me to obtain a model (JSON + data), convert it to ONNX, and then use the ONNX model. In most cases the origin of the model will not be under my control.
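
For context, my understanding is that the JSON + H5 pair is loaded roughly like this (a minimal sketch; load_json_h5_model is an illustrative name, and the repo's actual LoadModel helper may differ):

    import tensorflow as tf

    def load_json_h5_model(base_name, custom_objects):
        # Rebuild the architecture from the JSON description...
        with open(base_name + '.json') as f:
            model = tf.keras.models.model_from_json(
                f.read(), custom_objects=custom_objects)
        # ...then restore the trained weights from the H5 file.
        model.load_weights(base_name + '.h5')
        return model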

PS: I can attach the JSON and data files to the issue if required (the data file is 1.3 MB). That would remove the need to train the model manually.

My package versions are:

$ python --version
Python 3.8.4
$ pip list
Package                Version
---------------------- ---------
keras2onnx             1.7.1
onnx                   1.7.0
onnxconverter-common   1.7.0
tensorflow             2.2.0

jiafatom commented 3 years ago

Is there a reason you use target_opset=7? The current target_opset for keras2onnx 1.7 is opset 12.
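
For example (assuming convert_keras picks the release's default opset when the parameter is omitted):

    # Let keras2onnx choose its default opset for this release...
    onnx_model = keras2onnx.convert_keras(model, model.name)
    # ...or request opset 12 explicitly:
    onnx_model = keras2onnx.convert_keras(model, model.name, target_opset=12)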

CJCombrink commented 3 years ago

> Is there a reason you use target_opset=7? The current target_opset for keras2onnx 1.7 is opset 12.

No reason, inexperience and unfamiliarity with the environment mostly.

Setting it to 12 results in the same issue.

jiafatom commented 3 years ago

I just went through this model. The error occurs when we try to freeze the TensorFlow graph; the failing node input name is 'model/gru/while/enter/_53', which appears to come from the GRU layer. The TF freeze step fails, which means the model cannot be frozen into a TF graph, so there is not much we can do on our side.
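
The freeze step here is essentially TensorFlow's convert_variables_to_constants_v2 (the traceback shows our bundled copy in _graph_cvt.py). A minimal sketch to try the same step with TF alone, assuming a single-input model loaded as in the snippet above:

    import tensorflow as tf
    from tensorflow.python.framework.convert_to_constants import (
        convert_variables_to_constants_v2)

    # Trace the Keras model into a concrete function with the input
    # signature it was built with (assuming a single input).
    run_model = tf.function(lambda x: model(x))
    concrete_func = run_model.get_concrete_function(
        tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype))

    # This mirrors the freeze step from the traceback; for this model I
    # would expect the same "Cannot find the Placeholder op ..." ValueError.
    frozen_func = convert_variables_to_constants_v2(concrete_func)

If this raises the same error, the failure is reproducible without keras2onnx at all.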

CJCombrink commented 3 years ago

@jiafatom Thank you for taking the time to look into this. Is there any indication of why it can't freeze: is it the way the model is set up, something inside TF, something in the converter, etc.?

As mentioned, I am not the creator of the model, only the consumer. If this is something the creator of the model must look into, how can I communicate it to the developer so that it can be addressed, and so that when I receive a model I can convert it to ONNX format without issues?