onnx / onnx-tensorflow

Tensorflow Backend for ONNX

Conv.W in initializer but not in graph input #229

Open nheidloff opened 6 years ago

nheidloff commented 6 years ago

I'd like to use Watson Visual Recognition to generate a CoreML model which can be converted via this project into a TensorFlow model, so that I can run it on Android.

I've used this Watson sample to create the CoreML model: https://github.com/watson-developer-cloud/visual-recognition-coreml

Next I converted the model to ONNX via this tutorial: https://github.com/onnx/onnxmltools

When running the import script, I get this error:

root@a14232b63639:/volume# python import.py 
Traceback (most recent call last):
  File "import.py", line 4, in <module>
    tf_rep = prepare(model)
  File "/usr/local/lib/python2.7/dist-packages/onnx_tf/backend.py", line 345, in prepare
    super(TensorflowBackendBase, cls).prepare(model, device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx/backend/base.py", line 65, in prepare
    onnx.checker.check_model(model)
  File "/usr/local/lib/python2.7/dist-packages/onnx/checker.py", line 82, in check_model
    C.check_model(model.SerializeToString())
onnx.onnx_cpp2py_export.checker.ValidationError: Conv.W in initializer but not in graph input

This is my import script:

import onnx
from onnx_tf.backend import prepare
model = onnx.load('/volume/DefaultCustomModel_2132124965.onnx')
tf_rep = prepare(model)

In order to run the script, I use Docker:

docker run -v /Users/nheidloff/Desktop/play/volume:/volume -it tensorflow/tensorflow:1.7.1-devel bash

In the container I invoke these commands:

apt-get update
apt-get install protobuf-compiler libprotoc-dev
pip install onnx
pip install onnx-tf
cd /volume
python import.py

Any help is appreciated!

fumihwh commented 6 years ago

@nheidloff It seems your ONNX model has some problems: it fails the ONNX checker before onnx-tf even runs. Maybe you should ask in the onnxmltools repo.