CJCombrink opened this issue 3 years ago
Is there a reason you use `target_opset=7`? The current `target_opset` for 1.7 is opset 12.
> Is there a reason you use `target_opset=7`? The current `target_opset` for 1.7 is opset 12.
No reason, inexperience and unfamiliarity with the environment mostly.
Setting it to 12 results in the same issue.
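For reference, the conversion attempt looks roughly like this (a minimal sketch; the file names and the use of separate json + h5 files follow the issue description, but the exact paths are assumptions):

```python
# Minimal sketch of the conversion step, assuming the model was saved as
# architecture json + h5 weights (file names here are assumptions).
def load_and_convert(json_path, weights_path, opset=12):
    """Load a Keras model and convert it to ONNX at an explicit opset."""
    from tensorflow.keras.models import model_from_json
    import keras2onnx

    with open(json_path) as f:
        model = model_from_json(f.read())
    model.load_weights(weights_path)
    # Pin target_opset explicitly rather than relying on the default,
    # which varies between keras2onnx releases.
    return keras2onnx.convert_keras(model, model.name, target_opset=opset)
```

As noted above, pinning the opset to 12 does not change the outcome here; the failure happens earlier, at the graph-freezing stage.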
I just went through this model: the error occurs when we try to freeze the TensorFlow graph. The failing node input name here is 'model/gru/while/enter/_53', which appears to be inside the GRU. Since the TF freeze step fails, the model cannot be frozen into a TF graph, and at that point there is not much we can do on our side.
@jiafatom Thank you for taking the time to look into this. Is there any indication of why it can't freeze: is it the way the model is set up, something inside TF, something in the converter, etc.?
As you are aware, I am not the creator of the model, only the consumer. If this is something the creator of the model must look into, how can I communicate it to the developer so that it can be addressed, and so that when I receive a model I can convert it to ONNX format without issues?
I am trying to convert a Keras model to ONNX and get the above error.
I trained a model using this GitHub repo: https://github.com/fbadine/LSTNet. The result is a json and an h5 file. I then load the model, and when I try to convert it I get the error:
Steps to reproduce (from the linked GitHub page, under Installation):
1. Run `./electricity.sh`
2. Load the model (in Python)
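The loading step above can be sketched as follows (the file names are assumptions based on the json + h5 output described earlier; the layer listing just shows where the GRU implicated in the freeze error comes from):

```python
# Minimal sketch: load the trained LSTNet model from its saved
# architecture + weights. File names ("model.json", "model.h5")
# are assumptions.
def load_lstnet(json_path="model.json", weights_path="model.h5"):
    """Rebuild the Keras model from architecture json and h5 weights."""
    from tensorflow.keras.models import model_from_json

    with open(json_path) as f:
        model = model_from_json(f.read())
    model.load_weights(weights_path)

    # List layer types; the recurrent (GRU) layer is the one whose
    # while-loop node fails during the TF freeze step.
    for layer in model.layers:
        print(layer.name, type(layer).__name__)
    return model
```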
I did see that the following issue might be related: #528. My concern with that implementation is that, from my understanding, it might require a change in the exporting code.
The problem that I am working on expects me to obtain a model (json + data) and then convert the model to ONNX and use the ONNX model. The origin of the model will in most cases not be under my control.
PS: I can add the JSON and data files to the issue if required; the data file is 1.3 MB. That would remove the need to train the model manually.
My package versions are: