PINTO0309 / openvino2tensorflow

This script converts ONNX/OpenVINO IR models to TensorFlow's saved_model, tflite, h5, tfjs, tftrt (TensorRT), CoreML, EdgeTPU, ONNX, and pb formats. Pipeline: PyTorch (NCHW) -> ONNX (NCHW) -> OpenVINO (NCHW) -> openvino2tensorflow -> TensorFlow/Keras (NHWC/NCHW) -> TFLite (NHWC/NCHW). It also converts between .pb, saved_model, .tflite, and ONNX. Supports building environments with Docker, with direct access to the host PC GUI and camera to verify operation. NVIDIA GPU (dGPU) and Intel iHD GPU (iGPU) are supported.
MIT License

Error with opening result ONNX model in C++ #104

Closed sergeygrosul closed 2 years ago

sergeygrosul commented 2 years ago

Issue Type

Bug, Feature Request

OS

Ubuntu

OS architecture

x86_64, aarch64

Programming Language

C++

Framework

ONNX, TensorFlow

Download URL for ONNX / OpenVINO IR

https://download.01.org/opencv/2020/openvinotoolkit/2020.4/open_model_zoo/models_bin/3/face-detection-0100/FP32/face-detection-0100.xml https://download.01.org/opencv/2020/openvinotoolkit/2020.4/open_model_zoo/models_bin/3/face-detection-0100/FP32/face-detection-0100.bin

Convert Script

H=256
W=256
MODEL=face-detection-0100
openvino2tensorflow \
--model_path ${MODEL}.xml \
--output_saved_model \
--output_pb \
--output_no_quant_float32_tflite \
--output_dynamic_range_quant_tflite \
--output_weight_quant_tflite \
--output_float16_quant_tflite \
--output_integer_quant_tflite \
--output_integer_quant_typ 'uint8' \
--string_formulas_for_normalization 'data / 255' \
--output_tfjs \
--output_coreml \
--weight_replacement_config replace.json

mv saved_model saved_model_${H}x${W}

openvino2tensorflow \
--model_path ${MODEL}.xml \
--output_saved_model \
--output_pb \
--output_onnx \
--onnx_opset 11 \
--keep_input_tensor_in_nchw \
--weight_replacement_config replace.json

mv saved_model/model_float32.onnx saved_model_${H}x${W}
rm -rf saved_model

openvino2tensorflow \
--model_path ${MODEL}.xml \
--output_saved_model \
--output_pb \
--output_tftrt_float32 \
--output_tftrt_float16 \
--weight_replacement_config replace.json

mv saved_model/tensorrt_saved_model_float32 saved_model_${H}x${W}
mv saved_model/tensorrt_saved_model_float16 saved_model_${H}x${W}
rm -rf saved_model

openvino2tensorflow \
--model_path ${MODEL}.xml \
--output_saved_model \
--output_pb \
--output_integer_quant_typ 'uint8' \
--string_formulas_for_normalization 'data / 255' \
--output_edgetpu \
--weight_replacement_config replace.json

mv saved_model/model_full_integer_quant.tflite saved_model_${H}x${W}
mv saved_model/model_full_integer_quant_edgetpu.tflite saved_model_${H}x${W}

Description

Hello! I'm trying to convert the face-detection-0100 model to ONNX. Following the recommendations in https://github.com/PINTO0309/openvino2tensorflow/issues/89 and https://github.com/PINTO0309/openvino2tensorflow/issues/52, I changed the XML file (removed the DetectionOutput layer), and the conversion to ONNX seems to have succeeded; at least there were no errors during conversion. But now I have two issues:

1) Opening the ONNX model in a C++ app

If I open the ONNX model in a C++ app with OpenCV:

    net = cv::dnn::readNetFromONNX("models/model_float32.onnx");
    net.setPreferableBackend(cv::dnn::DNN_BACKEND_OPENCV);
    net.setPreferableTarget(cv::dnn::DNN_TARGET_CPU);

I get this error:

[ERROR:0@0.040] global /home/name/opencv/modules/dnn/src/onnx/onnx_importer.cpp (909) handleNode DNN/ONNX: ERROR during processing node with 3 inputs and 1 outputs: [Clip]:(StatefulPartitionedCall/model/tf.nn.relu6/Relu6:0) from domain='ai.onnx'
terminate called after throwing an instance of 'cv::Exception'
  what():  OpenCV(4.5.5) /home/name/opencv/modules/dnn/src/onnx/onnx_importer.cpp:928: error: (-2:Unspecified error) in function 'handleNode'
> Node [Clip@ai.onnx]:(StatefulPartitionedCall/model/tf.nn.relu6/Relu6:0) parse error: OpenCV(4.5.5) /home/name/opencv/modules/dnn/src/onnx/onnx_importer.cpp:1613: error: (-2:Unspecified error) in function 'void cv::dnn::dnn4_v20211220::ONNXImporter::parseClip(cv::dnn::dnn4_v20211220::LayerParams&, const opencv_onnx::NodeProto&)'
> >  (expected: 'node_proto.input_size() == 1'), where
> >     'node_proto.input_size()' is 3
> > must be equal to
> >     '1' is 1
> 
Aborted (core dumped)

But at the same time, the same ONNX file opens and works successfully in Python (I use this source as a base: https://github.com/PINTO0309/PINTO_model_zoo/blob/ce8421f2e8720d636d5f00e817debdc6720f235a/178_vehicle-detection-0200/demo/demo_vehicle-detection-0200_onnx.py). The TensorFlow model model_float32.pb also opens correctly in my code.

I cannot tell whether this is a conversion error or whether something is wrong with my C++ code.

2) The model returns wrong results

The model does not detect faces. I guess the problem is with my replace.json file. I tried to make it similar to https://github.com/PINTO0309/PINTO_model_zoo/blob/ce8421f2e8720d636d5f00e817debdc6720f235a/178_vehicle-detection-0200/replace.json, but to be honest, I did not understand how to build this file correctly or what it should contain. It would be extremely helpful if you could give some hints about how to create this file and what should be in it. BTW, I tested 227_face-detection-adas-0001 from your PINTO_model_zoo repo and it seems it does not work either (does not find faces).
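For what it's worth, the general shape of a replace.json (as documented in the openvino2tensorflow README) is a list of per-layer patches keyed by the `layer_id` from the OpenVINO XML: `replace_mode` `"direct"` (or `"npy"`) overwrites a Const layer's values, while `"insert_before"`/`"insert_after"` injects an op such as a Transpose around the named layer. The layer ids and values below are placeholders for illustration, not the actual ones for face-detection-0100:

```json
{
  "format_version": 2,
  "layers": [
    {
      "layer_id": "659",
      "type": "Const",
      "replace_mode": "direct",
      "values": [1, 128, 8, 8]
    },
    {
      "layer_id": "660",
      "type": "Transpose",
      "replace_mode": "insert_after",
      "values": [0, 2, 3, 1]
    }
  ]
}
```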

My files: face-detection-0100.zip

Relevant Log Output

TensorFlow/Keras model building process complete!
saved_model output started ==========================================================
saved_model output complete!
.pb output started ==================================================================
.pb output complete! - saved_model/model_float32.pb
numpy dataset load started ==========================================================
numpy dataset load complete!
Full Integer Quantization started ===================================================
Estimated count of arithmetic ops: 777.423 M  ops, equivalently 388.711 M  MACs
fully_quantize: 0, inference_type: 6, input_inference_type: 3, output_inference_type: 3
Estimated count of arithmetic ops: 777.423 M  ops, equivalently 388.711 M  MACs
WARNING:absl:Buffer deduplication procedure will be skipped when flatbuffer library is not properly loaded
Full Integer Quantization complete! - saved_model/model_full_integer_quant.tflite
EdgeTPU convertion started ==========================================================
Edge TPU Compiler version 16.0.384591198
Searching for valid delegate with step 1
Try to compile segment with 103 ops
Started a compilation timeout timer of 3600 seconds.

Model compiled successfully in 945 ms.

Input model: saved_model/model_full_integer_quant.tflite
Input size: 2.20MiB
Output model: saved_model/model_full_integer_quant_edgetpu.tflite
Output size: 2.66MiB
On-chip memory used for caching model parameters: 2.33MiB
On-chip memory remaining for caching model parameters: 4.46MiB
Off-chip memory used for streaming uncached model parameters: 192.00B
Number of Edge TPU subgraphs: 1
Total number of operations: 103
Operation log: saved_model/model_full_integer_quant_edgetpu.log

Operator                       Count      Status

ADD                            10         Mapped to Edge TPU
SOFTMAX                        1          Mapped to Edge TPU
CONV_2D                        39         Mapped to Edge TPU
QUANTIZE                       3          Mapped to Edge TPU
MUL                            1          Mapped to Edge TPU
DEPTHWISE_CONV_2D              21         Mapped to Edge TPU
CONCATENATION                  2          Mapped to Edge TPU
PAD                            20         Mapped to Edge TPU
RESHAPE                        6          Mapped to Edge TPU
Compilation child process completed within timeout period.
Compilation succeeded! 

EdgeTPU convert complete! - saved_model/model_full_integer_quant_edgetpu.tflite
All the conversion process is finished! =============================================
user@2d7cd582832b:~/workdir$

Source code for simple inference testing code

No response

PINTO0309 commented 2 years ago

Try. https://github.com/PINTO0309/PINTO_model_zoo/tree/main/289_face-detection-0100

sergeygrosul commented 2 years ago

Hello Katsuya, thank you very much for your help!

I have tested this model, but the result is still not correct. Here is an example picture (the faces were detected by the same model with OpenVINO, so my goal is to reproduce that result with the ONNX model). I guess I'm doing something wrong, but I cannot figure out what exactly. I attached the test app (test-onnx.py opens the static image demo.png):

test-onnx : face-detection-0100-F32-2020.4-Right.tar.zip

sergeygrosul commented 2 years ago

I figured out this issue! (I was using the wrong index in my demo app.) Now everything is working fine! Domo arigatou!
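For anyone landing here later: the detection head of these Open Model Zoo SSD models emits a `[1, 1, N, 7]` tensor in which each row is `[image_id, label, confidence, x_min, y_min, x_max, y_max]` with coordinates normalized to 0..1, so an off-by-one in the row indices silently scrambles scores and boxes. A minimal decoding sketch (the threshold, image size, and demo data below are arbitrary, not from the thread):

```python
import numpy as np

def decode_detections(output, img_w, img_h, score_thresh=0.5):
    """Decode an SSD-style [1, 1, N, 7] detection tensor.
    Each row: [image_id, label, conf, xmin, ymin, xmax, ymax],
    coordinates normalized to [0, 1]; image_id == -1 marks padding rows."""
    boxes = []
    for image_id, label, conf, x1, y1, x2, y2 in output.reshape(-1, 7):
        if image_id < 0 or conf < score_thresh:
            continue
        boxes.append((int(label), float(conf),
                      int(x1 * img_w), int(y1 * img_h),
                      int(x2 * img_w), int(y2 * img_h)))
    return boxes

demo = np.zeros((1, 1, 2, 7), dtype=np.float32)
demo[0, 0, 0] = [0, 1, 0.9, 0.25, 0.25, 0.75, 0.75]  # one confident detection
demo[0, 0, 1] = [-1, 0, 0, 0, 0, 0, 0]               # padding row
print(decode_detections(demo, 256, 256))
```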