hunglc007 / tensorflow-yolov4-tflite

YOLOv4, YOLOv4-tiny, YOLOv3, YOLOv3-tiny implemented in TensorFlow 2.0 and Android. Convert YOLOv4 .weights to TensorFlow, TensorRT and TFLite.
https://github.com/hunglc007/tensorflow-yolov4-tflite
MIT License

Error message in loading TF Lite quantized model just after creating it #132

Open judahkshitij opened 4 years ago

judahkshitij commented 4 years ago

I have been experimenting with convert_tflite.py script to convert yolov4 to quantized tf lite model. The command I ran is:

python convert_tflite.py --weights ./checkpoints/yolov4-416 --output ./checkpoints/yolov4-416-int8.tflite --quantize_mode int8 --dataset ~/office_imgs.txt

where office_imgs.txt contains paths to some of my office images, used as the representative dataset for quantization. The script first converts and saves the TF Lite model to the file given in --output, then immediately calls the demo() method to load the model it just created and run some random input through it. However, the model fails to load in demo() with the following error message:

RuntimeError: tensorflow/lite/kernels/dequantize.cc:61 op_context.input->type == kTfLiteUInt8 || op_context.input->type == kTfLiteInt8 || op_context.input->type == kTfLiteInt16 || op_context.input->type == kTfLiteFloat16 was not true.Node number 836 (DEQUANTIZE) failed to prepare.

Any help to debug and resolve this issue is greatly appreciated. Thanks.
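For context, the post-training quantization path that convert_tflite.py drives can be sketched directly against the `tf.lite.TFLiteConverter` API. This is a minimal, self-contained sketch: the tiny Keras model and the random representative data are hypothetical stand-ins for the YOLOv4 SavedModel in ./checkpoints/yolov4-416 and the images listed in office_imgs.txt.

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in for the YOLOv4 SavedModel (the real script
# loads ./checkpoints/yolov4-416 instead of building a model here).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(4, 3, activation="relu"),
])

def representative_dataset():
    # Stand-in for the calibration images from office_imgs.txt.
    for _ in range(8):
        yield [np.random.rand(1, 64, 64, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
tflite_model = converter.convert()

with open("model-int8.tflite", "wb") as f:
    f.write(tflite_model)
```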

judahkshitij commented 4 years ago

The stack trace for the above error is below:

Traceback (most recent call last):
  File "convert_tflite.py", line 76, in <module>
    app.run(main)
  File "/Users/jkshitij/Development/tensorflow-yolov4-tflite/venv_yolov4_tflite/lib/python3.7/site-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/Users/jkshitij/Development/tensorflow-yolov4-tflite/venv_yolov4_tflite/lib/python3.7/site-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "convert_tflite.py", line 72, in main
    demo()
  File "convert_tflite.py", line 52, in demo
    interpreter.allocate_tensors()
  File "/Users/jkshitij/Development/tensorflow-yolov4-tflite/venv_yolov4_tflite/lib/python3.7/site-packages/tensorflow/lite/python/interpreter.py", line 243, in allocate_tensors
    return self._interpreter.AllocateTensors()

smalltingting commented 4 years ago

I met the same problem, shown below.

RuntimeError: tensorflow/lite/kernels/dequantize.cc:61 op_context.input->type == kTfLiteUInt8 || op_context.input->type == kTfLiteInt8 || op_context.input->type == kTfLiteInt16 || op_context.input->type == kTfLiteFloat16 was not true.Node number 80 (DEQUANTIZE) failed to prepare.

It seems that the output layer needs to DEQUANTIZE from int8 to fp32.
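One way to narrow down which tensor feeds the failing DEQUANTIZE node is to dump every tensor's dtype through the interpreter's introspection API. A self-contained sketch, using a tiny hypothetical model in place of the YOLOv4 one (with the real model, load the failing file via `tf.lite.Interpreter(model_path=...)` instead):

```python
import numpy as np
import tensorflow as tf

# Build and quantize a tiny stand-in model so this snippet runs on its own.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(2),
])

def rep_data():
    for _ in range(4):
        yield [np.random.rand(1, 4).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = rep_data
tflite_model = converter.convert()

interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
# Print index, name and dtype of every tensor; with the failing model,
# look at the tensors around the node number reported in the error.
for d in interpreter.get_tensor_details():
    print(d["index"], d["name"], d["dtype"])
```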

judahkshitij commented 4 years ago

> I met the same problem as below. RuntimeError: tensorflow/lite/kernels/dequantize.cc:61 op_context.input->type == kTfLiteUInt8 || op_context.input->type == kTfLiteInt8 || op_context.input->type == kTfLiteInt16 || op_context.input->type == kTfLiteFloat16 was not true.Node number 80 (DEQUANTIZE) failed to prepare. It seems that the output layer needs to DEQUANTIZE from int8 to fp32 .

@smalltingting Thanks for your comment. So is it complaining that it is unable to dequantize from int8 to fp32 because the tensor is not of int8 type?

smalltingting commented 4 years ago

> > I met the same problem as below. RuntimeError: tensorflow/lite/kernels/dequantize.cc:61 op_context.input->type == kTfLiteUInt8 || op_context.input->type == kTfLiteInt8 || op_context.input->type == kTfLiteInt16 || op_context.input->type == kTfLiteFloat16 was not true.Node number 80 (DEQUANTIZE) failed to prepare. It seems that the output layer needs to DEQUANTIZE from int8 to fp32 .
>
> @smalltingting Thanks for your comment. So is it complaining that it is unable to dequantize from int8 to fp32 because it is not in int8 type?

I have solved this problem. You can just convert the backbone network to a tflite model. That means you can get rid of the detection layers and concat layers when you convert.
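The backbone-only workaround can be sketched like this, assuming a Keras graph where the cut point is known by layer name. The two-layer model and the layer names below are hypothetical stand-ins for the CSPDarknet53 backbone and the YOLO detection head; the idea is just to rebuild a sub-model ending at the backbone output and quantize that instead of the full graph.

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in: a "backbone" conv followed by a "detection" head.
inputs = tf.keras.Input(shape=(32, 32, 3))
x = tf.keras.layers.Conv2D(8, 3, name="backbone_conv")(inputs)
head = tf.keras.layers.Conv2D(18, 1, name="detect_conv")(x)
full_model = tf.keras.Model(inputs, head)

# Cut the graph at the backbone output and convert only that sub-model,
# leaving the detection/concat layers out of the quantized graph.
backbone = tf.keras.Model(inputs, full_model.get_layer("backbone_conv").output)

def rep_data():
    for _ in range(4):
        yield [np.random.rand(1, 32, 32, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(backbone)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = rep_data
backbone_tflite = converter.convert()
```

The decode/NMS post-processing then has to run outside the TFLite model, in plain TensorFlow or NumPy.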

archie9211 commented 4 years ago

@smalltingting Can you please elaborate on how you solved this error?

YLTsai0609 commented 3 years ago

@smalltingting would you like to share your solution? many thanks!

bebetocf commented 3 years ago

I had the same error and solved it with #214.
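For readers hitting the same RuntimeError: one commonly suggested conversion recipe (not necessarily the exact change in #214, which is not reproduced here) is to restrict the converter to int8 builtin kernels and set explicit int8 input/output types, so every op in the emitted graph, including any trailing DEQUANTIZE, has a type the runtime can prepare. A minimal sketch with a hypothetical stand-in model:

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in model; the real case is the YOLOv4 SavedModel.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(2),
])

def rep_data():
    for _ in range(8):
        yield [np.random.rand(1, 4).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = rep_data
# Pin the converter to int8 kernels and int8 model I/O.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model = converter.convert()

interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
print(interpreter.get_input_details()[0]["dtype"])
```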