PINTO0309 / openvino2tensorflow

This script converts ONNX/OpenVINO IR models to TensorFlow's saved_model, tflite, h5, tfjs, tftrt (TensorRT), CoreML, EdgeTPU, ONNX, and pb formats. The conversion flow is PyTorch (NCHW) -> ONNX (NCHW) -> OpenVINO (NCHW) -> openvino2tensorflow -> TensorFlow/Keras (NHWC/NCHW) -> TFLite (NHWC/NCHW). It also converts from .pb to saved_model, from saved_model to .pb, from .pb and saved_model to .tflite, and from saved_model to ONNX. Supports building environments with Docker, including direct access to the host PC's GUI and camera to verify operation. NVIDIA GPU (dGPU) and Intel iHD GPU (iGPU) are supported.
MIT License

vgg model is killed #98

Closed · hayoyo12 closed this issue 2 years ago

hayoyo12 commented 2 years ago

Issue Type

Bug, Feature Request, Others

OS

Ubuntu

OS architecture

x86_64

Programming Language

Python

Framework

OpenVINO, ONNX, TensorFlow

Download URL for ONNX / OpenVINO IR

wget https://s3.amazonaws.com/onnx-model-zoo/vgg/vgg16/vgg16.onnx

Convert Script

# load the ONNX model (Python)
onnx_model = onnx.load(onnx_path)

# convert from ONNX to OpenVINO IR (shell)
source ../openvino_env/bin/activate
mo --input_model $onnx --output_dir $save_dir
deactivate

# convert from OpenVINO IR to TensorFlow using openvino2tensorflow
openvino2tensorflow --model_path $input_path --model_output_path $output_path --output_saved_model

Description

While converting the OpenVINO IR file to TensorFlow with openvino2tensorflow, the process suddenly stopped near the last node. It does not print any log output.

Please see the attached log screenshot.

Relevant Log Output

70361 Killed                  openvino2tensorflow --model_path $input_path --model_output_path $output_path --output_saved_model

The number printed before 'Killed' changes on every run.
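That "Killed" line is typically the Linux out-of-memory (OOM) killer terminating the process, and the changing number is simply the PID of each attempt. A minimal pre-flight check, assuming the third-party psutil package is installed:

import psutil

# Report how much RAM is free before starting the conversion.
avail_gb = psutil.virtual_memory().available / 1024**3
print(f"Available RAM: {avail_gb:.1f} GB")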

Source code for simple inference testing code

No response

PINTO0309 commented 2 years ago

Perhaps it is not a problem with this tool itself. Is there enough RAM in your device? TensorFlow seems to consume about 60 GB of RAM in the process of generating the saved_model in the backend.

I would suggest instead that you obtain VGG16 in saved_model format from the beginning.
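For reference, Keras ships a pretrained VGG16, so a saved_model can be produced without any conversion at all. A minimal sketch, assuming TensorFlow 2.x, where saving to a directory path writes the SavedModel format; the output path is arbitrary:

import tensorflow as tf

# Pretrained VGG16 straight from Keras applications (downloads ImageNet weights).
model = tf.keras.applications.VGG16(weights="imagenet")
# Saving to a directory path exports the TensorFlow SavedModel format.
model.save("vgg16_saved_model")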

# start the prebuilt conversion environment, mounting the current directory
docker run -it --rm \
-v `pwd`:/home/user/workdir \
ghcr.io/pinto0309/openvino2tensorflow:latest

wget https://s3.amazonaws.com/onnx-model-zoo/vgg/vgg16/vgg16.onnx

MODEL=vgg16
H=224
W=224

# simplify the ONNX graph with onnx-simplifier
onnxsim ${MODEL}.onnx ${MODEL}_${H}x${W}.onnx

# convert the simplified ONNX model to OpenVINO IR (FP32)
$INTEL_OPENVINO_DIR/deployment_tools/model_optimizer/mo.py \
--input_model ${MODEL}_${H}x${W}.onnx \
--data_type FP32 \
--output_dir ${MODEL}_${H}x${W}/openvino/FP32 \
--model_name ${MODEL}_${H}x${W}

# convert the OpenVINO IR to saved_model, .pb, and Float32 tflite
openvino2tensorflow \
--model_path ${MODEL}_${H}x${W}/openvino/FP32/${MODEL}_${H}x${W}.xml \
--output_saved_model \
--output_pb \
--output_no_quant_float32_tflite \
--non_verbose

# rebuild a saved_model from the generated .pb
pb_to_saved_model \
--pb_file_path saved_model/model_float32.pb \
--inputs inputs:0 \
--outputs model/tf.math.add_15/Add:0

# replace the saved_model with the one rebuilt from the .pb
mv saved_model_from_pb/* saved_model
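A quick way to confirm the result loads and runs is one dummy inference. A minimal sketch: the 1x224x224x3 NHWC input shape matches VGG16 above, and the input name is looked up from the signature rather than hard-coded, since it can vary with the conversion:

import numpy as np
import tensorflow as tf

# Load the converted model and run one dummy inference as a smoke test.
loaded = tf.saved_model.load("saved_model")
infer = loaded.signatures["serving_default"]

# Look up the signature's input name instead of hard-coding it.
input_name = list(infer.structured_input_signature[1].keys())[0]
dummy = tf.constant(np.random.rand(1, 224, 224, 3).astype(np.float32))
outputs = infer(**{input_name: dummy})
for name, tensor in outputs.items():
    print(name, tensor.shape)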


hayoyo12 commented 2 years ago

@PINTO0309 I will check the RAM again. Thank you!