PINTO0309 / openvino2tensorflow

This script converts ONNX/OpenVINO IR models to TensorFlow's saved_model, tflite, h5, tfjs, tftrt (TensorRT), CoreML, EdgeTPU, ONNX, and pb formats. The pipeline is PyTorch (NCHW) -> ONNX (NCHW) -> OpenVINO (NCHW) -> openvino2tensorflow -> TensorFlow/Keras (NHWC/NCHW) -> TFLite (NHWC/NCHW). It also converts .pb to saved_model, saved_model to .pb, .pb to .tflite, saved_model to .tflite, and saved_model to ONNX. Docker build environments are supported, with direct access to the host PC's GUI and camera for verifying operation. NVIDIA GPU (dGPU) and Intel iHD GPU (iGPU) are supported.
MIT License
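The first hop in that chain is a plain PyTorch ONNX export. A minimal sketch, with a placeholder model and input shape (nothing here is taken from this issue):

import torch

# Placeholder model; any torch.nn.Module with an NCHW input works here.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
    torch.nn.ReLU(),
).eval()

# Dummy NCHW input; its shape becomes the exported input shape.
dummy_input = torch.zeros(1, 3, 160, 160)

torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=11,
)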

Layer data type changes during conversion #140

Closed: svobora closed this issue 1 year ago

svobora commented 1 year ago

Issue Type

Bug

OS

Ubuntu

OS architecture

x86_64

Programming Language

Python

Framework

OpenVINO

Download URL for ONNX / OpenVINO IR

pip

Convert Script

import os
import sys

# Model Optimizer entry point; the exact import path depends on the OpenVINO
# release (openvino.tools.mo in 2022.x, plain mo in older versions).
from openvino.tools.mo import main as mo_main

# `args` (an argparse namespace) and `fake_input` (a dummy input tensor)
# come from the surrounding script.
openvino_out_dir = f"{args.destination_dir}/openvino"
os.makedirs(openvino_out_dir, exist_ok=True)

# Step 1: ONNX -> OpenVINO IR via Model Optimizer, run as a subprocess.
print(f"Generating openvino at: {openvino_out_dir}")
cmd = [
    sys.executable, mo_main.__file__,
    '--input_model', args.destination_dir + "/model.onnx",
    '--input_shape', "[" + ",".join(str(x) for x in fake_input.shape) + "]",
    '--output_dir', openvino_out_dir,
]

retcode = os.system(" ".join(cmd))
assert retcode == 0, 'Failed to do conversion'

# Step 2: OpenVINO IR -> TensorFlow saved_model and float32 tflite.
openvino2tensorflow_out_dir = f"{args.destination_dir}/openvino2tensorflow"
openvino_xml_name = "model.xml"

print(f'Generating openvino2tensorflow model at: {openvino2tensorflow_out_dir} ...')
cmd = [
    'openvino2tensorflow',
    '--model_path', f'{openvino_out_dir}/{openvino_xml_name}',
    '--model_output_path', openvino2tensorflow_out_dir,
    '--output_saved_model',
    '--output_no_quant_float32_tflite',
]

retcode = os.system(" ".join(cmd))
assert retcode == 0, 'Failed to do conversion'
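As an aside, the two os.system calls above can be replaced with subprocess.run, which takes the argument list directly and raises on a non-zero exit code; a minimal sketch reusing the cmd list built above:

import subprocess

# Equivalent to os.system(" ".join(cmd)) plus the assert, but without
# shell string joining; check=True raises CalledProcessError on failure.
subprocess.run(cmd, check=True)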

Description

The input to my network is uint8, and the first layer casts it to float32. The ONNX model has the correct input type, and the OpenVINO XML also correctly sets the input layer's element_type to u8, but the TensorFlow protobuf's input layer is float32.
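For illustration, a front end like the one described would look roughly like the hypothetical reconstruction below (inferred from the IR dump that follows, not the reporter's actual model):

import torch

class UInt8FrontEnd(torch.nn.Module):
    """Hypothetical sketch of the reported input stage: a uint8 tensor is
    cast to float32 and scaled, then fed to the first convolution."""

    def __init__(self):
        super().__init__()
        # Matches the 16x3x3x3 stride-2 convolution visible in the IR below.
        self.conv = torch.nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Exported as a Convert layer plus a Multiply (the folded division).
        x = x.to(torch.float32) / 255.0
        return self.conv(x)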

Relevant Log Output

<?xml version="1.0"?>
<net name="torch_jit" version="11">
    <layers>
        <layer id="0" name="input" type="Parameter" version="opset1">
            <data shape="1,3,160,160" element_type="u8" />
            <rt_info>
                <attribute name="old_api_map_element_type" version="0" value="f32" />
            </rt_info>
            <output>
                <port id="0" precision="U8" names="input">
                    <dim>1</dim>
                    <dim>3</dim>
                    <dim>160</dim>
                    <dim>160</dim>
                </port>
            </output>
        </layer>
        <layer id="1" name="/input_model/Cast" type="Convert" version="opset1">
            <data destination_type="f32" />
            <input>
                <port id="0" precision="U8">
                    <dim>1</dim>
                    <dim>3</dim>
                    <dim>160</dim>
                    <dim>160</dim>
                </port>
            </input>
            <output>
                <port id="1" precision="FP32" names="/input_model/Cast_output_0">
                    <dim>1</dim>
                    <dim>3</dim>
                    <dim>160</dim>
                    <dim>160</dim>
                </port>
            </output>
        </layer>
        <layer id="2" name="Constant_8938" type="Const" version="opset1">
            <data element_type="f32" shape="1, 1, 1, 1" offset="0" size="4" />
            <output>
                <port id="0" precision="FP32">
                    <dim>1</dim>
                    <dim>1</dim>
                    <dim>1</dim>
                    <dim>1</dim>
                </port>
            </output>
        </layer>
        <layer id="3" name="/input_model/Div" type="Multiply" version="opset1">
            <data auto_broadcast="numpy" />
            <input>
                <port id="0" precision="FP32">
                    <dim>1</dim>
                    <dim>3</dim>
                    <dim>160</dim>
                    <dim>160</dim>
                </port>
                <port id="1" precision="FP32">
                    <dim>1</dim>
                    <dim>1</dim>
                    <dim>1</dim>
                    <dim>1</dim>
                </port>
            </input>
            <output>
                <port id="2" precision="FP32" names="/input_model/Div_output_0">
                    <dim>1</dim>
                    <dim>3</dim>
                    <dim>160</dim>
                    <dim>160</dim>
                </port>
            </output>
        </layer>
        <layer id="4" name="onnx::Conv_1084" type="Const" version="opset1">
            <data element_type="f32" shape="16, 3, 3, 3" offset="4" size="1728" />
            <output>
                <port id="0" precision="FP32" names="onnx::Conv_1084">
                    <dim>16</dim>
                    <dim>3</dim>
                    <dim>3</dim>
                    <dim>3</dim>
                </port>
            </output>
        </layer>
        <layer id="5" name="/backbone_model/features/Conv2d_0/Conv2d_0.1/Conv/WithoutBiases" type="Convolution" version="opset1">
            <data strides="2, 2" dilations="1, 1" pads_begin="0, 0" pads_end="1, 1" auto_pad="explicit" />
            <input>
                <port id="0" precision="FP32">
                    <dim>1</dim>
                    <dim>3</dim>
                    <dim>160</dim>
                    <dim>160</dim>
                </port>
                <port id="1" precision="FP32">
                    <dim>16</dim>
                    <dim>3</dim>
                    <dim>3</dim>
                    <dim>3</dim>
                </port>
            </input>
            <output>
                <port id="2" precision="FP32">
                    <dim>1</dim>
                    <dim>16</dim>
                    <dim>80</dim>
                    <dim>80</dim>
                </port>
            </output>
        </layer>

Source code for simple inference testing code

No response

PINTO0309 commented 1 year ago

That is the specification.
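One quick way to confirm the reported behavior is to inspect the converted model's input signature; a minimal sketch, assuming the default model_float32.tflite produced by --output_no_quant_float32_tflite in the script above:

import tensorflow as tf

# Path assumes the output directory and file name from the convert script.
interpreter = tf.lite.Interpreter(
    model_path="openvino2tensorflow/model_float32.tflite"
)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
print(input_details["dtype"])  # float32 here, not uint8, as the issue reports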

I am not motivated to maintain openvino2tensorflow for the time being. In addition, I am working on a tool that converts from ONNX to TensorFlow, which is a significant improvement over this tool.

https://github.com/PINTO0309/onnx2tf
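For anyone migrating, onnx2tf also exposes a Python entry point alongside its CLI; a minimal sketch with placeholder paths (check the onnx2tf README for the current options):

import onnx2tf

# Converts an ONNX file directly to a TensorFlow saved_model directory,
# skipping the OpenVINO IR step entirely.
onnx2tf.convert(
    input_onnx_file_path="model.onnx",
    output_folder_path="saved_model",
)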

The same policy applies to both tools: I will not investigate unless an ONNX file is provided.