Open · xumengwei opened this issue 3 years ago
Are you using the latest tf-nightly version? It's a needed dependency, as mentioned here: #9394
Thanks @huberemanuel, installing tf-nightly solved the above problem. However, I still have a question.
I followed the steps here for the different ways to quantize the model. The original model size is 124630308 bytes. When I try "Convert using dynamic range quantization", the model size becomes 32380912 bytes. When I try "Convert using float fallback quantization", the model size is 32671552 bytes, which is slightly larger, even though according to the tutorial this approach should be able to further reduce the model size. Can you explain the difference between the two?
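For reference, the only difference between the two conversions is whether a representative dataset is attached to the converter; a minimal sketch of both (the path and the dataset generator are placeholders, not my exact code):

```python
import tensorflow as tf

# Dynamic range quantization: weights are stored as int8, activations stay
# float, so no calibration data is needed.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model")  # placeholder path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
dynamic_range_model = converter.convert()

# Float fallback quantization: a representative dataset calibrates activation
# ranges so most ops can run in int8; ops without an int8 kernel fall back to
# float, and the float input/output interface is kept.
def representative_dataset():
    for _ in range(100):
        # Placeholder input; in practice yield real preprocessed images.
        yield [tf.random.uniform((1, 640, 640, 3), dtype=tf.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
float_fallback_model = converter.convert()
```

If I read the tutorial right, both modes store the weights as int8, which would explain why the two files come out at roughly the same size: float fallback mainly changes how activations are executed, not how much the weights shrink.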
In addition, trying "Convert using integer-only quantization" gives the following error:
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-52-a25e448a979d> in <module>
     11 converter.inference_output_type = tf.uint8
     12
---> 13 tflite_model_quant = converter.convert()
     14
     15 interpreter = tf.lite.Interpreter(model_content=tflite_model_quant)

~/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/lite.py in convert(self)
    760     calibrate_and_quantize, flags = quant_mode.quantizer_flags()
    761     if calibrate_and_quantize:
--> 762       result = self._calibrate_quantize_model(result, **flags)
    763
    764     flags_modify_model_io_type = quant_mode.flags_modify_model_io_type(

~/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/lite.py in _calibrate_quantize_model(self, result, inference_input_type, inference_output_type, activations_type, allow_float)
    479     return calibrate_quantize.calibrate_and_quantize(
    480         self.representative_dataset.input_gen, inference_input_type,
--> 481         inference_output_type, allow_float, activations_type)
    482
    483   def _is_unknown_shapes_allowed(self):

~/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/optimize/calibrator.py in calibrate_and_quantize(self, dataset_gen, input_type, output_type, allow_float, activations_type, resize_input)
    102         np.dtype(input_type.as_numpy_dtype()).num,
    103         np.dtype(output_type.as_numpy_dtype()).num, allow_float,
--> 104         np.dtype(activations_type.as_numpy_dtype()).num)
    105
    106   def calibrate_and_quantize_single(self,

RuntimeError: Quantization not yet supported for op: 'CUSTOM'.
The issue has been reported in a previous post, but it seems there's no solution yet. Is it because full-integer quantization is not yet supported for SSD models?
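A plausible explanation: the exported SSD graph ends in the TFLite_Detection_PostProcess custom op, which has no int8 kernel, so integer-only calibration aborts on it. If that's right, a sketch of a fallback that still quantizes everything else (reusing the placeholder representative_dataset from above):

```python
# Sketch of a workaround, assuming the 'CUSTOM' op in the error is
# TFLite_Detection_PostProcess: quantize everything the calibrator supports,
# but keep float fallback and float I/O so the custom op can stay unquantized.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model")  # placeholder path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset  # as sketched above
# Deliberately NOT set, since these force integer-only conversion:
# converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# converter.inference_input_type = tf.uint8
# converter.inference_output_type = tf.uint8
tflite_model = converter.convert()
```

This is essentially the float fallback path that already converts successfully above, which is consistent with the custom op being the blocker.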
@xumengwei I think it would be nice to open your new questions as a new issue, to make them easily accessible.
The pre-trained model I used is ssd_mobilenet_v1_fpn_640x640_coco17_tpu-8. The code I used is below:
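(The exact snippet did not survive the paste; judging from the traceback above, it presumably followed the tutorial's integer-only quantization recipe, roughly as below, where saved_model_dir and representative_dataset are placeholders:)

```python
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force integer-only conversion, as in the tutorial:
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8   # matches line 11 of the traceback

tflite_model_quant = converter.convert()     # raises the RuntimeError shown above

interpreter = tf.lite.Interpreter(model_content=tflite_model_quant)
```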
When I try the saved model generated through object_detection/export_tflite_graph_tf2.py, as described here, I get the following error:
I also tried the saved_model that comes with the original pre-trained checkpoint from the model zoo, and I got another error:
My ultimate goal is to test the accuracy of OD models after quantization. Right now I cannot even convert the model, let alone quantize it. Can anyone help me with this?
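For the accuracy test itself, the usual loop drives the converted model through tf.lite.Interpreter; a minimal sketch with placeholder paths and inputs (the output tensor order of the SSD postprocess op should be checked via get_output_details()):

```python
import numpy as np
import tensorflow as tf

# Minimal sketch of running a converted detection model; image preprocessing
# and the COCO-style accuracy accumulation are elided.
interpreter = tf.lite.Interpreter(model_path="model.tflite")  # placeholder path
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
# Placeholder input matching the model's expected shape and dtype:
image = np.zeros(input_details["shape"], dtype=input_details["dtype"])
interpreter.set_tensor(input_details["index"], image)
interpreter.invoke()

# The SSD postprocess op typically emits boxes, classes, scores and the
# number of detections; check get_output_details() for the exact order.
outputs = [interpreter.get_tensor(d["index"]) for d in interpreter.get_output_details()]
```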
System information