Closed ChiHangChen closed 3 years ago
@ChiHangChen Your usage definitely looks correct, so I believe this is a bug in the TFLite conversion. Unfortunately, this issue is out of our hands; please open an issue here. My suggestion is to also add some extra calibration steps:
calibration_steps = 200
def representative_data_gen():
    for i in range(calibration_steps):
        for input_value in dataset.take(sample_size):
            yield [input_value]
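For context, the generator above yields `calibration_steps * sample_size` samples in total (the same first `sample_size` samples, repeated). A minimal, framework-free sketch of that behavior; the `DummyDataset` class here is a stand-in for a real `tf.data.Dataset`, and `sample_size` is an assumed name from the thread:

```python
# Stand-in for tf.data.Dataset: .take(n) returns the first n items.
class DummyDataset:
    def __init__(self, items):
        self.items = items

    def take(self, n):
        return self.items[:n]

calibration_steps = 3
sample_size = 2
dataset = DummyDataset([[0.1], [0.2], [0.5], [0.9]])

def representative_data_gen():
    # Repeats the first `sample_size` samples `calibration_steps` times.
    for _ in range(calibration_steps):
        for input_value in dataset.take(sample_size):
            yield [input_value]

samples = list(representative_data_gen())
print(len(samples))  # 3 steps * 2 samples = 6
```

Note that raising `calibration_steps` only re-feeds the same samples; covering more *distinct* training images may matter more for calibration quality.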
I did; I increased calibration_steps up to 20000, but the result is still the same.
I'm using tensorflow-gpu==2.2.0, by the way.
@ChiHangChen Hmm, just FYI: with TF 2.x, these parameters are deprecated:
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
If you visualize your model with Netron, the I/O will still be float. Are you aware of that? Could I see how you're setting the input tensors?
Feel free to reopen if this issue still persists.
I'm trying to convert my Keras model into a quantized TFLite model so that I can run it on a Coral TPU, but the outputs of the Keras model and the TFLite model are significantly different.
The red points are the quantized TFLite model's output, and the blue points are the original Keras model's output.
Here is my code to convert the Keras model to a quantized TFLite model:
X_train is my training data, and I scale input image values from 0 to 1 by dividing by 255, so I do the same in the representative_data_gen function. Any assistance you can provide would be greatly appreciated.
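For reference, a minimal sketch of a representative dataset generator that applies the same 1/255 scaling as training. This is framework-free numpy; the `X_train` array here is a random placeholder for the real training images, and the shape and `calibration_steps` are assumed values:

```python
import numpy as np

# Placeholder for the real training images (uint8, values 0-255).
X_train = np.random.randint(0, 256, size=(100, 32, 32, 3), dtype=np.uint8)
calibration_steps = 100

def representative_data_gen():
    for i in range(calibration_steps):
        # Same preprocessing as training: scale pixel values to [0, 1].
        img = X_train[i].astype(np.float32) / 255.0
        yield [img[np.newaxis, ...]]  # add a batch dimension

sample = next(representative_data_gen())
print(sample[0].shape)  # (1, 32, 32, 3)
```

The key point is that the calibration data must match the training preprocessing exactly; if the model was trained on [0, 1] inputs but calibrated on [0, 255] inputs, the chosen quantization ranges will be wrong and the quantized outputs will diverge.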