Closed IronPhoenixBlade closed 10 months ago
You probably just don't see the tflite file from within the container because you haven't mounted the host PC's drive with the -v `pwd`:/home/user/workdir option.
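For reference, a minimal sketch of what the bind mount looks like (the image tag is a placeholder, not the project's actual tag, and the `docker run` line is left as a comment here):

```shell
# Build the mount spec from the current directory so files on the
# host PC become visible inside the container at /home/user/workdir.
MOUNT_SPEC="$(pwd):/home/user/workdir"
echo "${MOUNT_SPEC}"

# The actual invocation would then be something like (not executed here):
# docker run -it --rm -v "${MOUNT_SPEC}" <tflite2tensorflow-image>
```

Without that `-v` option the container only sees its own filesystem, so a tflite file sitting in your host working directory simply does not exist from the container's point of view.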
If you are not familiar with how to handle containers, it may be more efficient to convert your model using the following procedure instead of using containers on the host PC.
```shell
pip install tensorflow onnxruntime tf2onnx onnx2tf

python -m tf2onnx.convert \
  --opset 11 \
  --tflite pose_detection.tflite \
  --output pose_detection.onnx \
  --inputs-as-nchw input_1 \
  --dequantize

onnx2tf -i pose_detection.onnx -o saved_model -osd
```
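If you prefer scripting the two steps rather than typing them by hand, they can be driven from Python with subprocess. This is only a sketch: it assumes pose_detection.tflite sits in the current directory and that the packages from the pip install line above are installed, and it runs the commands only when the input file is actually present.

```python
import pathlib
import subprocess

tflite_path = pathlib.Path("pose_detection.tflite")  # input model from the issue

# Step 1: tflite -> ONNX, with the same flags as the CLI invocation above
tf2onnx_cmd = [
    "python", "-m", "tf2onnx.convert",
    "--opset", "11",
    "--tflite", str(tflite_path),
    "--output", "pose_detection.onnx",
    "--inputs-as-nchw", "input_1",
    "--dequantize",
]

# Step 2: ONNX -> TensorFlow saved_model with signature defs (-osd)
onnx2tf_cmd = ["onnx2tf", "-i", "pose_detection.onnx", "-o", "saved_model", "-osd"]

if tflite_path.exists():  # only run the conversions when the model file is there
    subprocess.run(tf2onnx_cmd, check=True)
    subprocess.run(onnx2tf_cmd, check=True)
```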
Thanks for the quick responses and the multiple routes to a fix!
Trying the onnx approach I get another error, probably having to do with this particular model. To avoid clogging up this message here's a gdoc with the logs I got: https://docs.google.com/document/d/1Hs1QWfnzXiqodGM7CxKXxkhcJE4y1ynNfn825ZYUBYQ/edit?usp=sharing
In case I take the document down, here are the relevant errors from when I ran tf2onnx.convert:
```
/usr/lib/python3.10/runpy.py:126: RuntimeWarning: 'tf2onnx.convert' found in sys.modules after import of package 'tf2onnx', but prior to execution of 'tf2onnx.convert'; this may result in unpredictable behaviour
  warn(RuntimeWarning(msg))
...
2023-08-14 12:06:50,907 - WARNING - Error loading model into tflite interpreter: Interpreter._get_tensor_details() missing 1 required positional argument: 'subgraph_index'
Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/weasel/.local/lib/python3.10/site-packages/tf2onnx/convert.py", line 467, in <module>
    main()
  File "/home/weasel/.local/lib/python3.10/site-packages/tf2onnx/convert.py", line 236, in main
    model_proto, _ = _convert_common(
  File "/home/weasel/.local/lib/python3.10/site-packages/tf2onnx/convert.py", line 153, in _convert_common
    g = process_tf_graph(tf_graph, const_node_values=const_node_values, **kwargs)
  File "/home/weasel/.local/lib/python3.10/site-packages/tf2onnx/tfonnx.py", line 474, in process_tf_graph
    parse_tflite_graph(tfl_graph, opcodes, model, prefix, tensor_shapes_from_interpreter)
  File "/home/weasel/.local/lib/python3.10/site-packages/tf2onnx/tflite_utils.py", line 301, in parse_tflite_graph
    np_data = tensor_util.MakeNdarray(t)
  File "/home/weasel/.local/lib/python3.10/site-packages/tensorflow/python/framework/tensor_util.py", line 663, in MakeNdarray
    dtype=dtype).copy().reshape(shape))
ValueError: cannot reshape array of size 96 into shape (16,1,1,24)
```
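As an aside, the final ValueError is just an element-count mismatch: the target shape needs 16×1×1×24 = 384 elements, but the buffer parsed out of the tflite flatbuffer holds only 96, exactly a factor of 4 fewer. One plausible (unconfirmed) cause is a dtype mix-up during parsing, e.g. a quantized 1-byte buffer being read as 4-byte float32 values. A minimal reproduction of the numpy side of the failure:

```python
import numpy as np

target_shape = (16, 1, 1, 24)
expected = int(np.prod(target_shape))  # elements needed by the target shape
print(expected)  # 384

buf = np.zeros(96, dtype=np.float32)  # what the parser ended up with
try:
    buf.reshape(target_shape)
except ValueError as e:
    print(e)  # cannot reshape array of size 96 into shape (16,1,1,24)

# 384 raw bytes viewed as float32 collapse to 96 elements, consistent
# with a 1-byte-per-element buffer read with a 4-byte dtype.
raw = np.zeros(384, dtype=np.int8)
print(raw.view(np.float32).size)  # 96
```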
The -v `pwd`:/home/user/workdir route seems promising on my end; I'm just having issues finding the OpenVINO installation folder (setupvars.sh not finding the Python binaries, etc.). That's not tflite2tensorflow's issue though; I'll post another comment when I'm able to get that to work.
It's easy to suggest problems and solutions, but explaining them in detail is a pain. In fact, you don't need to do any of that yourself. Here is the model I converted:
https://github.com/PINTO0309/PINTO_model_zoo/tree/main/053_BlazePose
Issue Type
Bug
OS
Ubuntu
OS architecture
x86_64
Programming Language
C++
Framework
TensorFlowLite
Download URL for tflite file
https://storage.googleapis.com/mediapipe-assets/pose_detection.tflite
Convert Script
tflite2tensorflow
Description
I assume this is just a user error, but I don't know how to debug it or work out what I'm doing differently from the instructions. I've downloaded the current Dockerfile and I'm attempting to run tflite2tensorflow with this command:

And I get this error:

which makes the script unable to do its thing. This flatc file is built on v1.12.0 of the current repository. When I copy-paste that output command into another docker command, I get the same error. However, when I run it directly in my terminal, the command creates a JSON file. With all of that in mind, am I doing something wrong? By the way, I'm incredibly grateful that you have published all of these conversion programs! It's amazing :smile:
Relevant Log Output
Source code for simple inference testing code
No response