Closed: vuk119 closed this issue 5 years ago.
The ONNX model has to use opset 7, i.e. an ONNX 1.2/1.2.1/1.2.2 model. You can go to https://github.com/onnx/models and try the models that mention opset 7, ONNX 1.2.1, or ONNX 1.2; those should work.
Can you try to create your model using:

```python
onnx_model = onnxmltools.convert_keras(model, target_opset=7)
onnxmltools.utils.save_model(onnx_model, 'modelConv2.onnx')
```
To check that the saved opset is indeed 7, run the following. (Note: target version 17763 can only load ONNX models based on opset 7.)

```python
opset_version = onnx_model.opset_import[0].version
```
As Riaz mentioned, you'll want to ensure you are using a supported opset in your model. If you are still seeing issues after trying the conversion steps Riaz provided, please reopen this issue and provide any new information.
I'm submitting a…
Bug report (I searched for similar issues and did not find one)
Current behavior
I am trying to run a model created with Keras (TensorFlow backend) using Windows ML. I followed this tutorial https://docs.microsoft.com/en-us/windows/ai/windows-ml/get-started-desktop and it worked fine. However, when I write my own model that includes a pooling layer and save it as .onnx, I get an error like this when I try to load it with Windows ML in C++:
Exception thrown at 0x76B83442 in test1.exe: Microsoft C++ exception: winrt::hresult_error at memory location 0x005BF2A4.
When I do not have a pooling layer, everything works fine!
Minimal reproduction of the problem with instructions
Code for generating model:
Code for loading the model:
Environment
Windows Build Number: 18362.239
App min and target version: both 17763
OS Version (Server, IoT Core, Desktop, etc): Desktop
Visual Studio