Closed: JunhoohnuJ closed this issue 3 years ago.
It's probably a problem with NNAPI.
Thanks for the reply. I wonder why your quantized model (from PINTO_model_zoo) runs on my chip while my own conversion does not. Could you tell me the versions of ONNX, OpenVINO, openvino2tensorflow, and TensorFlow you used when converting YOLOX? I will try to set up the same environment and convert again.
I am only using the Docker containers listed in the README.
Hi, I'm trying to regenerate the YOLOX-Nano model with INT8 quantization using your ONNX and OpenVINO files (from PINTO_model_zoo). I'm also using your Docker image and conversion script. I can run your quantized INT8 tflite file, but I cannot run the quantized model I converted myself from your ONNX or OpenVINO files: I get the error "node XXX (tflite nnapi delegate) failed to invoke". However, my quantized model does run on CPU only, and the non-quantized (FP32) model works fine. I'm really confused by this situation. Can you help me solve this problem?