guichristmann / edge-tpu-tiny-yolo

Run Tiny YOLO-v3 on Google's Edge TPU USB Accelerator.
MIT License

Converting the provided darknet model to tflite results in a 'model not quantized' error #13

Closed: alexanderswerdlow closed this issue 4 years ago

alexanderswerdlow commented 4 years ago

Using the provided darknet weights and cfg, and the recommended repo to convert them to Keras (h5) and then to tflite, I find that the generated tflite differs from the tflite in the model folder.

This results in a 'model not quantized' error, even though the provided quant_coco-tiny-v3-relu.tflite compiles just fine.

Comparing the two tflite models (before compiling for the Edge TPU), I find that the block shown below is the only difference, but I'm not sure what is causing it. I was actually trying to convert my own custom model and attempted this as a (failed) sanity check.

quant_coco-tiny-v3-relu.tflite provided:

[screenshot: quantization block in the provided model]

generated tflite using keras_to_tflite_quant.py (h5 generated with coco-tiny-v3-relu.cfg and coco-tiny-v3-relu.weights):

[screenshot: the same block in the generated model]
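
For anyone else trying to pin down a difference like this, here is a minimal sketch of one way to diff the per-tensor quantization parameters of the two files with the TFLite interpreter. The file paths are placeholders, not taken from this repo:

```python
import tensorflow as tf

def quant_details(path):
    """Map tensor name -> (dtype, (scale, zero_point)) for every tensor."""
    interpreter = tf.lite.Interpreter(model_path=path)
    interpreter.allocate_tensors()
    return {
        t["name"]: (t["dtype"], t["quantization"])
        for t in interpreter.get_tensor_details()
    }

# Placeholder paths: the provided model and the one produced by
# keras_to_tflite_quant.py.
provided = quant_details("models/quant_coco-tiny-v3-relu.tflite")
generated = quant_details("generated_quant_coco-tiny-v3-relu.tflite")

# Print only the tensors whose dtype or quantization parameters differ.
for name in sorted(set(provided) | set(generated)):
    if provided.get(name) != generated.get(name):
        print(name, provided.get(name), generated.get(name))
```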

alexanderswerdlow commented 4 years ago

Found the solution in this StackOverflow post.

For reference, I'm on TensorFlow 2.3.0.
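
The linked post isn't quoted here, but the usual fix for 'model not quantized' on TF 2.x is to force full-integer quantization with a representative dataset, so the converter doesn't leave float ops in the graph for the Edge TPU compiler to reject. A sketch under those assumptions (the 416x416 input size and file names are guesses, and real calibration images should replace the random data):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("coco-tiny-v3-relu.h5")

def representative_dataset():
    # Feed a few batches of inputs so the converter can calibrate
    # activation ranges. Random data is only a smoke test; use real
    # preprocessed images for a usable model.
    for _ in range(100):
        yield [np.random.rand(1, 416, 416, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Restrict the converter to int8 builtins; without this it may emit a
# float-fallback model, which edgetpu_compiler reports as not quantized.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("quant_coco-tiny-v3-relu.tflite", "wb") as f:
    f.write(converter.convert())
```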