ividal opened this issue 6 years ago
Just in case, I repeated everything with TensorFlow 1.5.0, since it's the last version explicitly mentioned in the documentation, but the error is exactly the same.
[Edit] For the sake of completeness, I tried freezing the graph with the bazel-built tool, as the original tutorial suggested. Same results.
bazel build tensorflow/python/tools:freeze_graph
bazel-bin/tensorflow/python/tools/freeze_graph \
--input_graph=/home/ividal/dev/onnx/tutorials/tutorials/graph.proto \
--input_checkpoint=/home/ividal/dev/onnx/tutorials/tutorials/ckpt/model.ckpt \
--output_graph=/tmp/frozen_graph.pb \
--output_node_names=fc2/add \
--input_binary=True
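For reference, the same freeze can also be run from Python via TensorFlow's freeze_graph module, in case building the bazel target isn't an option. A rough sketch, reusing the paths and output node from the command above (the remaining arguments are the standard TF 1.x defaults and are assumptions on my part):

```python
# Sketch: freeze a TF 1.x checkpoint from Python instead of the bazel-built tool.
# Paths and the output node name are taken from the command above; the other
# arguments are the usual TF 1.x defaults (assumed here).
from tensorflow.python.tools import freeze_graph

freeze_graph.freeze_graph(
    input_graph='/home/ividal/dev/onnx/tutorials/tutorials/graph.proto',
    input_saver='',
    input_binary=True,
    input_checkpoint='/home/ividal/dev/onnx/tutorials/tutorials/ckpt/model.ckpt',
    output_node_names='fc2/add',
    restore_op_name='save/restore_all',
    filename_tensor_name='save/Const:0',
    output_graph='/tmp/frozen_graph.pb',
    clear_devices=True,
    initializer_nodes='')
```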
I'm hitting the same problem. Does anyone have a solution?
I have a feeling it's this problem: NCHW vs NHWC at the different steps, i.e. training vs freezing vs exporting vs loading in ONNX. I just don't know exactly where or how to fix it.
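If it really is just the layout, the difference is easy to see (and to work around on the input side) by transposing batches between the two conventions. A small sketch, assuming the 28x28x1 MNIST input from the tutorial:

```python
import numpy as np

# TensorFlow's default layout is NHWC: (batch, height, width, channels).
batch_nhwc = np.zeros((1, 28, 28, 1), dtype=np.float32)  # shape assumed from MNIST

# ONNX models conventionally use NCHW: (batch, channels, height, width).
batch_nchw = np.transpose(batch_nhwc, (0, 3, 1, 2))

print(batch_nhwc.shape)  # (1, 28, 28, 1)
print(batch_nchw.shape)  # (1, 1, 28, 28)
```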
@ividal Did you find a solution to this issue yet? Please let me know.
Did anyone find a solution to this issue?
@knandanan No, sorry. I opted to keep ONNX and TF separate for this (I stick to TF and a .pb if deployment has to be with TF, e.g. on an Android device).
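For anyone taking the same route, running the frozen .pb directly with TF 1.x looks roughly like the sketch below. Only fc2/add comes from the freeze command above; the input placeholder name and shape are assumptions for illustration:

```python
import numpy as np
import tensorflow as tf

# Load the frozen GraphDef produced by freeze_graph.
with tf.gfile.GFile('/tmp/frozen_graph.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name='')

# 'input:0' is an assumed placeholder name; 'fc2/add' is the output node
# passed to freeze_graph above.
with tf.Session(graph=graph) as sess:
    logits = sess.run('fc2/add:0',
                      feed_dict={'input:0': np.zeros((1, 28, 28, 1), np.float32)})
    print(logits.shape)
```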
Python version: 3.5.2
onnx==1.2.1
onnx-tf==1.1.2
tensorflow-gpu==1.8.0
Using the tutorial as of this commit.
Following the instructions in the tutorial, I used this script to train, which worked smoothly. I then froze the model with the `freeze_graph` tool, which produced the expected `/tmp/frozen_graph.pb`, and the export code in the tutorial produced the expected `mnist.onnx` file.

`model = onnx.load('mnist.onnx')` works, but `tf_rep = prepare(model)` fails. From the error message, I gather the expected channels might be switched (?). However, I did not modify the tutorial code, so it shouldn't be that. Any ideas...?
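For reference, the failing steps boil down to roughly the following sketch (the input array and its NCHW shape are assumptions for illustration; `onnx.load` and `prepare` are the calls from the tutorial):

```python
import numpy as np
import onnx
from onnx_tf.backend import prepare

model = onnx.load('mnist.onnx')   # loads fine
tf_rep = prepare(model)           # this is the call that raises the error

# If prepare() succeeded, inference would look like this; the input shape
# (NCHW here) is assumed for illustration.
output = tf_rep.run(np.zeros((1, 1, 28, 28), dtype=np.float32))
print(output)
```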
Thanks!