PINTO0309 / tflite2tensorflow

Generate saved_model, tfjs, tf-trt, EdgeTPU, CoreML, quantized tflite, ONNX, OpenVINO, Myriad Inference Engine blob and .pb from .tflite. Support for building environments with Docker. It is possible to directly access the host PC GUI and the camera to verify the operation. NVIDIA GPU (dGPU) support. Intel iHD GPU (iGPU) support. Supports inverse quantization of INT8 quantization model.
https://qiita.com/PINTO
MIT License

tflite2tensorflow Docker has flatc error #39

Closed: IronPhoenixBlade closed this issue 10 months ago

IronPhoenixBlade commented 11 months ago

Issue Type

Bug

OS

Ubuntu

OS architecture

x86_64

Programming Language

C++

Framework

TensorFlowLite

Download URL for tflite file

https://storage.googleapis.com/mediapipe-assets/pose_detection.tflite

Convert Script

tflite2tensorflow

Description

I assume this is just a user error, but I don't know how to debug it or work out what I'm doing differently from the instructions. I've downloaded the current Dockerfile and I'm attempting to run tflite2tensorflow with this command:

sudo docker run a670fceb0fe1 tflite2tensorflow --model_path pose_detection.tflite --flatc_path ./flatc --schema_path ./schema.fbs --output_pb

And I get this error:

...
FILEs may be schemas (must end in .fbs), binary schemas (must end in .bfbs),
or JSON files (conforming to preceding schema). FILEs after the -- must be
binary flatbuffer format files.
Output files are named using the base file name of the input,
and written to the current directory or the path given by -o.
example: ./flatc -c -b schema1.fbs schema2.fbs data.json
output json command = ./flatc -t --strict-json --defaults-json -o . ./schema.fbs -- pose_detection.tflite

which leaves the script unable to do its job. The flatc binary was built from v1.12.0 of the flatbuffers repository.

When I copy and paste that output command into another docker run command:

sudo docker run a670fceb0fe1 ./flatc -t --strict-json --defaults-json -o . ./schema.fbs -- pose_detection.tflite

I get the same error. However, when I run it directly in my terminal, the command creates a JSON file. With all of that in mind, am I doing something wrong? By the way, I'm incredibly grateful that you have published all of these conversion programs! It's amazing :smile:
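A quick way to see what the container can actually access is to list its default working directory (a minimal sketch, reusing the image ID from the command above):

sudo docker run a670fceb0fe1 ls -la

If pose_detection.tflite does not show up in that listing, flatc inside the container has no file to load.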

Relevant Log Output

sudo docker run a670fceb0fe1 tflite2tensorflow --model_path pose_detection.tflite --flatc_path ./flatc --schema_path ./schema.fbs --output_pb
./flatc: error: unable to load file: pose_detection.tflite
Usage: ./flatc [OPTION]... FILE... [-- FILE...]
  --binary         -b    Generate wire format binaries for any data definitions.
  --json           -t    Generate text output for any data definitions.
  --cpp            -c    Generate C++ headers for tables/structs.
  --go             -g    Generate Go files for tables/structs.
  --java           -j    Generate Java classes for tables/structs.
  --js             -s    Generate JavaScript code for tables/structs.
  --dart           -d    Generate Dart classes for tables/structs.
  --ts             -T    Generate TypeScript code for tables/structs.
  --csharp         -n    Generate C# classes for tables/structs.
  --python         -p    Generate Python files for tables/structs.
  --lobster              Generate Lobster files for tables/structs.
  --lua            -l    Generate Lua files for tables/structs.
  --rust           -r    Generate Rust files for tables/structs.
  --php                  Generate PHP files for tables/structs.
  --kotlin               Generate Kotlin classes for tables/structs.
  --jsonschema           Generate Json schema.
  --swift                Generate Swift files for tables/structs.
  -o PATH                Prefix PATH to all generated files.
  -I PATH                Search for includes in the specified path.
  -M                     Print make rules for generated files.
  --version              Print the version number of flatc and exit.
  --strict-json          Strict JSON: field names must be / will be quoted,
                         no trailing commas in tables/vectors.
  --allow-non-utf8       Pass non-UTF-8 input through parser and emit nonstandard
                         \x escapes in JSON. (Default is to raise parse error on
                         non-UTF-8 input.)
  --natural-utf8         Output strings with UTF-8 as human-readable strings.
                         By default, UTF-8 characters are printed as \uXXXX escapes.
  --defaults-json        Output fields whose value is the default when
                         writing JSON
  --unknown-json         Allow fields in JSON that are not defined in the
                         schema. These fields will be discared when generating
                         binaries.
  --no-prefix            Don't prefix enum values with the enum type in C++.
  --scoped-enums         Use C++11 style scoped and strongly typed enums.
                         also implies --no-prefix.
  --gen-includes         (deprecated), this is the default behavior.
                         If the original behavior is required (no include
                         statements) use --no-includes.
  --no-includes          Don't generate include statements for included
                         schemas the generated file depends on (C++ / Python).
  --gen-mutable          Generate accessors that can mutate buffers in-place.
  --gen-onefile          Generate single output file for C# and Go.
  --gen-name-strings     Generate type name functions for C++ and Rust.
  --gen-object-api       Generate an additional object-based API.
  --gen-compare          Generate operator== for object-based API types.
  --gen-nullable         Add Clang _Nullable for C++ pointer. or @Nullable for Java
  --java-checkerframe    work Add @Pure for Java.
  --gen-generated        Add @Generated annotation for Java
  --gen-all              Generate not just code for the current schema files,
                         but for all files it includes as well.
                         If the language uses a single file for output (by default
                         the case for C++ and JS), all code will end up in this one
                         file.
  --cpp-include          Adds an #include in generated file.
  --cpp-ptr-type T       Set object API pointer type (default std::unique_ptr).
  --cpp-str-type T       Set object API string type (default std::string).
                         T::c_str(), T::length() and T::empty() must be supported.
                         The custom type also needs to be constructible from std::string
                         (see the --cpp-str-flex-ctor option to change this behavior).
  --cpp-str-flex-ctor    Don't construct custom string types by passing std::string
                         from Flatbuffers, but (char* + length).
  --cpp-std CPP_STD      Generate a C++ code using features of selected C++ standard.
                         Supported CPP_STD values:
                          * 'c++0x' - generate code compatible with old compilers;
                          * 'c++11' - use C++11 code generator (default);
                          * 'c++17' - use C++17 features in generated code (experimental).
  --object-prefix        Customise class prefix for C++ object-based API.
  --object-suffix        Customise class suffix for C++ object-based API.
                         Default value is "T".
  --no-js-exports        Removes Node.js style export lines in JS.
  --goog-js-export       Uses goog.exports* for closure compiler exporting in JS.
  --es6-js-export        Uses ECMAScript 6 export style lines in JS.
  --go-namespace         Generate the overrided namespace in Golang.
  --go-import            Generate the overrided import for flatbuffers in Golang
                         (default is "github.com/google/flatbuffers/go").
  --raw-binary           Allow binaries without file_indentifier to be read.
                         This may crash flatc given a mismatched schema.
  --size-prefixed        Input binaries are size prefixed buffers.
  --proto                Input is a .proto, translate to .fbs.
  --proto-namespace-suffix Add this namespace to any flatbuffers generated
    SUFFIX                 from protobufs.
  --oneof-union          Translate .proto oneofs to flatbuffer unions.
  --grpc                 Generate GRPC interfaces for the specified languages.
  --schema               Serialize schemas instead of JSON (use with -b).
  --bfbs-comments        Add doc comments to the binary schema files.
  --bfbs-builtins        Add builtin attributes to the binary schema files.
  --bfbs-gen-embed       Generate code to embed the bfbs schema to the source.
  --conform FILE         Specify a schema the following schemas should be
                         an evolution of. Gives errors if not.
  --conform-includes     Include path for the schema given with --conform PATH
  --filename-suffix      The suffix appended to the generated file names.
                         Default is '_generated'.
  --filename-ext         The extension appended to the generated file names.
                         Default is language-specific (e.g., '.h' for C++)
  --include-prefix       Prefix this path to any generated include statements.
    PATH
  --keep-prefix          Keep original prefix of schema include statement.
  --no-fb-import         Don't include flatbuffers import statement for TypeScript.
  --no-ts-reexport       Don't re-export imported dependencies for TypeScript.
  --short-names          Use short function names for JS and TypeScript.
  --reflect-types        Add minimal type reflection to code generation.
  --reflect-names        Add minimal type/name reflection.
  --root-type T          Select or override the default root_type
  --force-defaults       Emit default values in binary output from JSON
  --force-empty          When serializing from object API representation,
                         force strings and vectors to empty rather than null.
  --force-empty-vectors  When serializing from object API representation,
                         force vectors to empty rather than null.
  --flexbuffers          Used with "binary" and "json" options, it generates
                         data using schema-less FlexBuffers.
FILEs may be schemas (must end in .fbs), binary schemas (must end in .bfbs),
or JSON files (conforming to preceding schema). FILEs after the -- must be
binary flatbuffer format files.
Output files are named using the base file name of the input,
and written to the current directory or the path given by -o.
example: ./flatc -c -b schema1.fbs schema2.fbs data.json
output json command = ./flatc -t --strict-json --defaults-json -o . ./schema.fbs -- pose_detection.tflite
Traceback (most recent call last):
  File "/usr/local/bin/tflite2tensorflow", line 6608, in <module>
    main()
  File "/usr/local/bin/tflite2tensorflow", line 5859, in main
    ops, json_tensor_details, op_types, full_json = parse_json(jsonfile_path)
  File "/usr/local/bin/tflite2tensorflow", line 247, in parse_json
    j = json.load(open(jsonfile_path))
FileNotFoundError: [Errno 2] No such file or directory: './pose_detection.json'

Source code for simple inference testing code

No response

PINTO0309 commented 11 months ago

You probably just don't see the tflite file from within the container because you haven't mounted the host PC drive with the -v `pwd`:/home/user/workdir option.
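
For example, a full invocation with the host directory mounted might look like this (a sketch, not verified: a670fceb0fe1 is the image ID from the report above, the current host directory is assumed to contain pose_detection.tflite, flatc, and schema.fbs, and the image's working directory is assumed to be /home/user/workdir so the relative paths resolve):

sudo docker run -it --rm \
  -v `pwd`:/home/user/workdir \
  a670fceb0fe1 \
  tflite2tensorflow \
    --model_path pose_detection.tflite \
    --flatc_path ./flatc \
    --schema_path ./schema.fbs \
    --output_pb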

PINTO0309 commented 11 months ago

If you are not familiar with handling containers, it may be more efficient to convert your model directly on the host PC using the following procedure instead of using containers.

pip install tensorflow onnxruntime tf2onnx onnx2tf

python -m tf2onnx.convert \
--opset 11 \
--tflite pose_detection.tflite \
--output pose_detection.onnx \
--inputs-as-nchw input_1 \
--dequantize

onnx2tf -i pose_detection.onnx -o saved_model -osd
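
If the tf2onnx step succeeds, the resulting ONNX file can be sanity-checked before running onnx2tf (a minimal sketch; the onnx and onnxruntime packages are pulled in by the pip install above):

python -c "import onnx; onnx.checker.check_model('pose_detection.onnx')"
python -c "import onnxruntime as ort; print([i.shape for i in ort.InferenceSession('pose_detection.onnx').get_inputs()])"
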
IronPhoenixBlade commented 10 months ago

Thanks for the quick responses and multiple routes to fix!

Trying the ONNX approach, I get another error, probably having to do with this particular model. To avoid clogging up this message, here's a Google Doc with the logs I got: https://docs.google.com/document/d/1Hs1QWfnzXiqodGM7CxKXxkhcJE4y1ynNfn825ZYUBYQ/edit?usp=sharing

In case I take the document down, here are the relevant errors when I ran tf2onnx.convert:

/usr/lib/python3.10/runpy.py:126: RuntimeWarning: 'tf2onnx.convert' found in sys.modules after import of package 'tf2onnx', but prior to execution of 'tf2onnx.convert'; this may result in unpredictable behaviour
  warn(RuntimeWarning(msg))

...

2023-08-14 12:06:50,907 - WARNING - Error loading model into tflite interpreter: Interpreter._get_tensor_details() missing 1 required positional argument: 'subgraph_index'
Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/weasel/.local/lib/python3.10/site-packages/tf2onnx/convert.py", line 467, in <module>
    main()
  File "/home/weasel/.local/lib/python3.10/site-packages/tf2onnx/convert.py", line 236, in main
    model_proto, _ = _convert_common(
  File "/home/weasel/.local/lib/python3.10/site-packages/tf2onnx/convert.py", line 153, in _convert_common
    g = process_tf_graph(tf_graph, const_node_values=const_node_values, **kwargs)
  File "/home/weasel/.local/lib/python3.10/site-packages/tf2onnx/tfonnx.py", line 474, in process_tf_graph
    parse_tflite_graph(tfl_graph, opcodes, model, prefix, tensor_shapes_from_interpreter)
  File "/home/weasel/.local/lib/python3.10/site-packages/tf2onnx/tflite_utils.py", line 301, in parse_tflite_graph
    np_data = tensor_util.MakeNdarray(t)
  File "/home/weasel/.local/lib/python3.10/site-packages/tensorflow/python/framework/tensor_util.py", line 663, in MakeNdarray
    dtype=dtype).copy().reshape(shape))
ValueError: cannot reshape array of size 96 into shape (16,1,1,24)

The -v `pwd`:/home/user/workdir option seems promising on my end; I'm just having issues finding the OpenVINO installation folder (setvar.sh not finding the Python binaries, etc.). That's not tflite2tensorflow's issue though - I'll post another comment when I'm able to get that to work.
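For reference, on a default OpenVINO 2021.x installation the environment script is typically sourced like this (the exact path and script name vary by OpenVINO version and install method, so treat this as an assumption rather than the layout inside the tflite2tensorflow image):

source /opt/intel/openvino_2021/bin/setupvars.sh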

PINTO0309 commented 10 months ago

It's easy to point out problems and solutions, but explaining them all is a pain. In fact, you don't have to do this conversion yourself.

Here is the model I converted.

https://github.com/PINTO0309/PINTO_model_zoo/tree/main/053_BlazePose