PINTO0309 / openvino2tensorflow

This script converts an ONNX/OpenVINO IR model to TensorFlow's saved_model, tflite, h5, tfjs, tftrt (TensorRT), CoreML, EdgeTPU, ONNX, and pb formats. The pipeline is PyTorch (NCHW) -> ONNX (NCHW) -> OpenVINO (NCHW) -> openvino2tensorflow -> TensorFlow/Keras (NHWC/NCHW) -> TFLite (NHWC/NCHW). It also supports conversion from .pb to saved_model, from saved_model to .pb, from .pb to .tflite, from saved_model to .tflite, and from saved_model to ONNX. Environments can be built with Docker, with direct access to the host PC's GUI and camera to verify operation. NVIDIA GPUs (dGPU) and Intel iHD GPUs (iGPU) are supported.
MIT License

ONNX to OpenVINO: can't find the mo.py file #45

Closed letdivedeep closed 3 years ago

letdivedeep commented 3 years ago

Hi @khursani8 @PINTO0309,

I was following the blog and used the Docker setup provided in the repo:

$ docker pull pinto0309/openvino2tensorflow
or
$ docker build -t pinto0309/openvino2tensorflow:latest .

# If you don't need to access the GUI of the HostPC and the USB camera.
$ docker run -it --rm \
  -v `pwd`:/home/user/workdir \
  pinto0309/openvino2tensorflow:latest

When I try to convert the ONNX model to OpenVINO, I cannot find the OpenVINO installation directory to use for the {INTEL_OPENVINO_DIR} path here:

$ python3 ${INTEL_OPENVINO_DIR}/deployment_tools/model_optimizer/mo.py \
  --input_model u2netp_320x320_opt.onnx \
  --input_shape [1,3,320,320] \
  --output_dir openvino/320x320/FP32 \
  --data_type FP32

Does the Docker image come with an OpenVINO setup? If so, what path should be input?

kuang-wei commented 3 years ago

Yes, it does; it's under /opt/intel/openvino_2021.
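A minimal sketch of filling in the path inside the container, based on the location given above. If INTEL_OPENVINO_DIR is not already set in the image's environment, exporting it manually (or sourcing OpenVINO's setupvars.sh, which normally defines it) makes the original mo.py command work as written:

```shell
# Assumes the install path mentioned above; adjust if your image differs.
export INTEL_OPENVINO_DIR=/opt/intel/openvino_2021
# The Model Optimizer script then resolves to:
echo "${INTEL_OPENVINO_DIR}/deployment_tools/model_optimizer/mo.py"
```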

PINTO0309 commented 3 years ago

[Screenshot: 2021-07-23 00:35:38]

letdivedeep commented 3 years ago

@PINTO0309 @kuang-wei Thanks for the reply. I was able to create an OpenVINO model.

But while running this command:

openvino2tensorflow \
  --model_path openvino/mbv2_opt.xml \
  --model_output_path saved_model \
  --output_saved_model \
  --output_integer_quant_tflite \
  --output_full_integer_quant_tflite \
  --output_integer_quant_type uint8 \
  --output_tftrt \
  --output_edgetpu \
  --output_float16_quant_tflite

I am getting an error while saving the saved_model: ERROR: Message tensorflow.SavedModel exceeds maximum protobuf size of 2GB: 2763310261

Attached the snapshot below: [Screenshot: 2021-07-23 at 12 25 30 PM]

PINTO0309 commented 3 years ago

Unfortunately, models that are larger than 2 GB after conversion cannot be converted. This is not a limitation of openvino2tensorflow, but of Protocol Buffers, the serialization format developed by Google. Try outputting to a pb file with --output_pb:

$ openvino2tensorflow \
  --model_path xxxx.xml \
  --output_saved_model \
  --output_pb

If you still get an error, the model size is too large and cannot be converted.
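For reference, the 2 GB figure in the error comes from protobuf's single-message serialization limit of 2^31 - 1 bytes, and the size reported in the error above exceeds it. A quick sketch of the arithmetic:

```shell
# Protocol Buffers caps a single serialized message at 2**31 - 1 bytes (~2 GiB).
PROTOBUF_MAX=$(( 2**31 - 1 ))   # 2147483647
REPORTED=2763310261             # size reported in the error message above
if [ "$REPORTED" -gt "$PROTOBUF_MAX" ]; then
  echo "exceeds protobuf limit"
fi
```

This prints "exceeds protobuf limit", which is why the SavedModel serialization fails regardless of which output flags are chosen.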