Closed FrancPS closed 4 years ago
--saved-model needs to be the directory of the saved-model. But I believe in this case the model comes as a checkpoint, so you might need to use --checkpoint instead. If you use a checkpoint, you also need to pass in --inputs and --outputs, since the checkpoint does not contain that information.
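To make the difference concrete, here is a minimal sketch of assembling the checkpoint-style invocation described above. The flag names are the ones tf2onnx documents; the paths and tensor names are placeholders, not taken from this issue:

```python
# Sketch: assemble the tf2onnx CLI call for a checkpoint conversion.
# All paths and tensor names below are placeholders for illustration.
cmd = [
    "python", "-m", "tf2onnx.convert",
    "--checkpoint", "model.ckpt.meta",  # path to the checkpoint's .meta file
    "--inputs", "input:0",              # required: the checkpoint doesn't store this
    "--outputs", "output:0",            # required: the checkpoint doesn't store this
    "--output", "model.onnx",           # where to write the converted model
]
print(" ".join(cmd))
```

Running this list through `subprocess.run` (or pasting the printed string into a shell) would perform the actual conversion, provided the tensor names match the graph.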
Thanks for the answer, that is probably it. Indeed, there is a .meta file I downloaded. In this case, how do I know what the inputs and outputs of this model are?
The command would be this one, I guess:
python -m tf2onnx.convert --checkpoint tensorflow-model-meta-file-path --output model.onnx --inputs input0:0,input1:0 --outputs output0:0
So, what should I look for to fill those in?
For the inputs, look for 'Placeholder' in the source code. The output is harder to find; I'd look for some demo/example that runs the inference ... like session.run([the_output:0], feed_dict=...)
We assume this is resolved.
=( not solved
Finding the inputs and outputs? For a checkpoint there is no simple rule, since any variable or placeholder could be an input and any node could be an output. Easiest might be if you have some eval.py script for TensorFlow ... the names are somewhere in there. Or look at the meta file (i.e. with netron): the inputs are most likely Placeholder ops, and the output is most likely some node that is not consumed by anything else. If this is a public model, send us a link and we can take a look at what the inputs/outputs are.
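The rule of thumb above can be sketched against a GraphDef-style node list: inputs are the Placeholder ops, and output candidates are nodes nobody else consumes. The node names below are invented for illustration, not taken from the model in this issue:

```python
# Mock GraphDef: each node has a name, an op type, and the nodes it consumes.
# These nodes are made up to illustrate the heuristic.
nodes = [
    {"name": "input",         "op": "Placeholder", "input": []},
    {"name": "conv1/weights", "op": "Const",       "input": []},
    {"name": "conv1",         "op": "Conv2D",      "input": ["input", "conv1/weights"]},
    {"name": "embeddings",    "op": "Identity",    "input": ["conv1"]},
]

# Inputs: nodes whose op type is Placeholder.
inputs = [n["name"] for n in nodes if n["op"] == "Placeholder"]

# Output candidates: nodes that no other node lists among its inputs.
# (Input refs can carry a ":0" port suffix in real graphs, so strip it.)
consumed = {ref.split(":")[0] for n in nodes for ref in n["input"]}
outputs = [n["name"] for n in nodes if n["name"] not in consumed]

print("inputs:", inputs)
print("output candidates:", outputs)
```

On a real graph this heuristic can produce false positives (e.g. unconsumed Const or summary nodes also show up as "outputs"), which is why inspecting the graph in netron or finding an eval script remains the more reliable route.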
Hi everyone! I have the same problem with the input node name and output node name of model.meta. I tried using TensorBoard to visualize the model and pick the input and output (there are so many nodes to choose from). If anyone can help me figure out what the real input and output are, it would be much appreciated! My script: python -m tf2onnx.convert --checkpoint model.ckpt-0.meta --output model.onnx --inputs pfld_conv1:0 --outputs pfld_inference:0
model.meta: https://drive.google.com/file/d/1lNtXaZqazCqdQOHm_iaCnO9Ur2_TczqC/view
Hi! I am stuck on this as well...
Does @davidsandberg have any insight?
System information
Issue: I downloaded a TensorFlow model of FaceNet from this page, and I'm trying to convert it from .pb into an .onnx file; however, it raises the following error:
To Reproduce:
root@xesk-VirtualBox:/home/xesk/Desktop# python -m tf2onnx.convert --saved-model home/xesk/Desktop/2s/20180402-114759/20180402-114759.pb --output model.onnx
2020-08-03 20:18:05.081538: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory
2020-08-03 20:18:05.081680: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2020-08-03 20:18:07,431 - WARNING - '--tag' not specified for saved_model. Using --tag serve
Traceback (most recent call last):
  File "/usr/lib/python3.8/runpy.py", line 193, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.8/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.8/dist-packages/tf2onnx/convert.py", line 171, in <module>
    main()
  File "/usr/local/lib/python3.8/dist-packages/tf2onnx/convert.py", line 131, in main
    graph_def, inputs, outputs = tf_loader.from_saved_model(
  File "/usr/local/lib/python3.8/dist-packages/tf2onnx/tf_loader.py", line 288, in from_saved_model
    _from_saved_model_v2(model_path, input_names, output_names, tag, signatures, concrete_function)
  File "/usr/local/lib/python3.8/dist-packages/tf2onnx/tf_loader.py", line 247, in _from_saved_model_v2
    imported = tf.saved_model.load(model_path, tags=tag)  # pylint: disable=no-value-for-parameter
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/saved_model/load.py", line 603, in load
    return load_internal(export_dir, tags, options)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/saved_model/load.py", line 614, in load_internal
    loader_impl.parse_saved_model_with_debug_info(export_dir))
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/saved_model/loader_impl.py", line 56, in parse_saved_model_with_debug_info
    saved_model = _parse_saved_model(export_dir)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/saved_model/loader_impl.py", line 110, in parse_saved_model
    raise IOError("SavedModel file does not exist at: %s/{%s|%s}" %
OSError: SavedModel file does not exist at: home/xesk/Desktop/2s/20180402-114759/20180402-114759.pb/{saved_model.pbtxt|saved_model.pb}
Additional context: I'm not running any CUDA or similar, only CPU. The model downloaded is 20180402-114759. It's the first time I'm working with these tools, and I'm a bit of a beginner in the AI world, so I might be missing something obvious. Of course, I checked the path and the command syntax several times. Might it be something to do with the format of the files I downloaded?
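The traceback suggests the likely cause: --saved-model expects a directory containing a saved_model.pb, while the file passed here looks like a single frozen-graph .pb (note also that the path home/xesk/... is missing its leading slash, so it is relative to the current directory). tf2onnx documents a --graphdef flag for frozen graphs, which, like --checkpoint, would also need --inputs and --outputs. A small sketch of telling the two layouts apart; the helper name here is mine, not part of tf2onnx:

```python
import os

def classify_model_path(path):
    """Rough heuristic: SavedModel directory vs. frozen-graph .pb file.

    This helper is illustrative only; it is not part of tf2onnx.
    """
    if os.path.isdir(path):
        # A SavedModel is a directory holding saved_model.pb or saved_model.pbtxt.
        if (os.path.isfile(os.path.join(path, "saved_model.pb"))
                or os.path.isfile(os.path.join(path, "saved_model.pbtxt"))):
            return "saved-model"  # convert with --saved-model <dir>
        return "directory without saved_model.pb"
    if path.endswith(".pb"):
        # A bare .pb file is most likely a frozen graph:
        # convert with --graphdef <file> plus --inputs/--outputs.
        return "frozen-graph"
    return "unknown"
```

Applied to the command above, the path points at a .pb file rather than a SavedModel directory, which is exactly why the loader reports "SavedModel file does not exist".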