Using the provided darknet model weights and cfg and the recommended repo to convert to Keras (.h5) and then to TFLite, I find that the generated .tflite differs from the .tflite in the model folder.
Compiling mine for the Edge TPU fails with a 'model not quantized' error, even though the provided quant_coco-tiny-v3-relu.tflite compiles just fine.
Comparing the two .tflite models (before Edge TPU compilation), the only difference I can find is the block shown below, but I am not sure what is causing it. I was originally attempting to convert my own custom model and ran this conversion as a (failed) sanity check.
quant_coco-tiny-v3-relu.tflite provided:
generated tflite using keras_to_tflite_quant.py (h5 generated with coco-tiny-v3-relu.cfg and coco-tiny-v3-relu.weights):
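For reference, below is a minimal sketch of the full-integer post-training quantization I understand keras_to_tflite_quant.py to be performing, using the standard tf.lite.TFLiteConverter API. The function name, the use of random data for the representative dataset, and the input shape are my own placeholders, not taken from the repo; the Edge TPU compiler requires all ops to be int8-quantized, which is what the TFLITE_BUILTINS_INT8 restriction enforces.

```python
import numpy as np
import tensorflow as tf


def quantize_for_edgetpu(model, input_shape):
    """Full-integer post-training quantization of a Keras model.

    `model` is a tf.keras.Model; `input_shape` excludes the batch
    dimension (e.g. (416, 416, 3) for tiny-YOLOv3).
    """
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]

    def representative_dataset():
        # Placeholder: in practice, yield ~100 real preprocessed images
        # so the activation ranges are calibrated on realistic data.
        for _ in range(100):
            yield [np.random.rand(1, *input_shape).astype(np.float32)]

    converter.representative_dataset = representative_dataset
    # Reject any op that cannot be expressed in int8; the Edge TPU
    # compiler raises 'model not quantized' if float ops remain.
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.uint8
    converter.inference_output_type = tf.uint8
    return converter.convert()
```

If the provided .tflite and the regenerated one differ only in one quantization block, a mismatch in these converter settings (or in the TensorFlow version used) between my run and the one that produced the provided file would be a plausible cause.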