PINTO0309 / openvino2tensorflow

This script converts ONNX/OpenVINO IR models to TensorFlow's saved_model, tflite, h5, tfjs, tftrt (TensorRT), CoreML, EdgeTPU, ONNX, and pb formats. The typical pipeline is: PyTorch (NCHW) -> ONNX (NCHW) -> OpenVINO (NCHW) -> openvino2tensorflow -> TensorFlow/Keras (NHWC/NCHW) -> TFLite (NHWC/NCHW). It also supports conversions between .pb, saved_model, .tflite, and ONNX. Docker-based build environments are supported, with direct access to the host PC's GUI and camera for verifying operation. NVIDIA GPU (dGPU) and Intel iHD GPU (iGPU) are supported.
MIT License

Regeneration of YOLOX model with quantize INT8 #69

Closed. JunhoohnuJ closed this issue 3 years ago.

JunhoohnuJ commented 3 years ago

Hi, I'm trying to regenerate the YOLOX-Nano model with INT8 quantization using your ONNX and OpenVINO files (from PINTO_model_zoo). I am also using your Docker image and conversion script. I can run your quantized INT8 tflite file, but I cannot run my own quantized model converted from your ONNX/OpenVINO files. I get the error message "node XXX (tflite nnapi delegate) failed to invoke". However, the quantized model runs on CPU only, and the non-quantized (FP32) model also runs fine. I'm really confused by this situation. Can you help me solve this problem?

PINTO0309 commented 3 years ago

It's probably a problem with NNAPI.
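One way to narrow this down is to run the same quantized model on the plain CPU TFLite interpreter, with no NNAPI delegate attached: if inference succeeds there (as the reporter observed) but fails under NNAPI, the failing ops are ones the device's NNAPI driver cannot handle. Below is a minimal, self-contained sketch of that check. The tiny convolution graph and all shapes are stand-ins for illustration, not the actual YOLOX-Nano model.

```python
import numpy as np
import tensorflow as tf

# Stand-in graph: a single conv + relu (illustrative, not YOLOX-Nano).
kernel = tf.constant(np.random.rand(3, 3, 3, 4).astype(np.float32))

@tf.function(input_signature=[tf.TensorSpec([1, 8, 8, 3], tf.float32)])
def conv(x):
    return tf.nn.relu(tf.nn.conv2d(x, kernel, strides=1, padding="SAME"))

# Calibration data for full-integer post-training quantization.
def representative_dataset():
    for _ in range(10):
        yield [np.random.rand(1, 8, 8, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [conv.get_concrete_function()])
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model = converter.convert()

# Run on the default CPU interpreter (no NNAPI delegate). If this invoke
# succeeds while the NNAPI-delegated run fails on-device, the problem is
# in the NNAPI driver's op support, not in the quantized model itself.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=np.int8))
interpreter.invoke()
out = interpreter.get_output_details()[0]
result = interpreter.get_tensor(out["index"])
print(result.shape)
```

On Android, the same comparison can be made by toggling the NNAPI delegate in the TFLite runtime options; any op the driver rejects falls back or fails at invoke time, which matches the reported "failed to invoke" message.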

JunhoohnuJ commented 3 years ago

Thanks for the reply. I wonder why your quantized model (from PINTO_model_zoo) runs on my chip. Can you tell me the versions of ONNX, OpenVINO, openvino2tensorflow, and TensorFlow you used when converting YOLOX? I will try to set up the same environment and convert.
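When trying to reproduce someone else's conversion environment, it helps to record the exact installed versions of the relevant packages on both sides. A small sketch using only the standard library (the package names listed are the ones mentioned in this thread; adjust as needed):

```python
from importlib import metadata

# Packages relevant to the conversion pipeline discussed in this issue.
packages = ("onnx", "openvino", "openvino2tensorflow", "tensorflow")

versions = {}
for pkg in packages:
    try:
        versions[pkg] = metadata.version(pkg)
    except metadata.PackageNotFoundError:
        # Gracefully note packages missing from this environment.
        versions[pkg] = "not installed"

for pkg, ver in versions.items():
    print(f"{pkg}: {ver}")
```

Comparing this output between the working Docker container and the local environment quickly surfaces any version mismatch.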

PINTO0309 commented 3 years ago

I am only using the Docker containers listed in the README.