Closed: sayakpaul closed this issue 4 years ago
Can you try running inference with the float TF Lite model using the images in /content/sample_images/? The error says your images are invalid. Besides, from my experience, you'll need about 100 images in the representative_dataset for good conversion results.
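For reference, the representative dataset is just a generator that yields one input batch at a time. Here's a minimal sketch with random stand-in data — in practice you'd yield ~100 real images from the training distribution, and the input shape below is an assumption, not the actual model's:

```python
import numpy as np

# Sketch of a representative dataset generator for tf.lite.TFLiteConverter.
# NUM_SAMPLES and INPUT_SHAPE are assumptions; the random data is only a
# stand-in for real images matching the model's input.
NUM_SAMPLES = 100
INPUT_SHAPE = (1, 128, 128, 3)  # batch of one, assumed model input size

def representative_dataset():
    for _ in range(NUM_SAMPLES):
        image = np.random.uniform(0.0, 1.0, INPUT_SHAPE).astype(np.float32)
        yield [image]  # the converter expects a list of input tensors

# Wired up on the converter (not run here):
# converter.representative_dataset = representative_dataset
batches = list(representative_dataset())
print(len(batches), batches[0][0].dtype)
```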
@khanhlvg I was able to export the int8 model. Currently, only three images are being provided to the representative dataset generation. Here's the Colab Notebook.
I will try with the original DIV2K dataset and gather 100 samples from there.
Thanks for the update! It's good to know that you were able to fix the error.
FYI there's no hard rule on how many samples are enough. For some models, a dozen samples can be enough. For others, it requires more. From my experience, 100 samples have worked fine for all the models I've tried.
Yes, that's my understanding as well. For example, the CartoonGAN int8 model was converted using only five images.
FYI in theory, you can convert with just one representative image :) However, if your representative dataset isn't large enough, the converted model's quality will suffer.
Yes, exactly. It depends on how well the activations are calibrated.
@khanhlvg here's the updated notebook that shows the int8 quantization process using 100 training images from the DIV2k dataset (on which the original model was trained).
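For anyone replicating this, here's one hedged sketch of how such a representative set can be assembled: center-crop each training image to the model's input size and scale pixels to [0, 1]. The patch size, normalization, and the random stand-in images are all assumptions — use whatever preprocessing the model was actually trained with:

```python
import numpy as np

PATCH = 128  # assumed model input size

def preprocess(image):
    """Center-crop to PATCH x PATCH and scale uint8 pixels to [0, 1] float32."""
    h, w = image.shape[:2]
    top, left = (h - PATCH) // 2, (w - PATCH) // 2
    patch = image[top:top + PATCH, left:left + PATCH]
    return patch.astype(np.float32)[np.newaxis, ...] / 255.0

# Stand-in for 100 DIV2K training images loaded from disk
# (in the notebook these would come from the actual dataset files).
images = [np.random.randint(0, 256, size=(256, 256, 3), dtype=np.uint8)
          for _ in range(100)]

def representative_dataset():
    for img in images:
        yield [preprocess(img)]

first = next(representative_dataset())[0]
print(first.shape)
```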
@margaretmz the models are hosted here. For easy navigation, it's mentioned here as well.
@khanhlvg as far as I know, the TRANSPOSE_CONV op is supported in TFLite (reference: https://www.tensorflow.org/lite/guide/ops_compatibility#tensorflow_lite_operations). But when I tried converting the original model using the int8 quantization recipe (i.e. with a representative dataset), I got an error. Here's the Colab Notebook that can reproduce this issue.
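For context, the standard full-integer recipe looks like the sketch below, using a tiny stand-in model with a Conv2DTranspose layer (which lowers to TRANSPOSE_CONV). This is not the original model — the shapes and layer sizes are assumptions — and whether the conversion succeeds or raises the error above depends on the TF version's int8 support for that op:

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in model containing a Conv2DTranspose (-> TRANSPOSE_CONV);
# just enough to exercise the int8 conversion recipe, not the real model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2DTranspose(8, 3, strides=2, padding="same"),
])

def representative_dataset():
    # A handful of random samples is enough to calibrate this toy model.
    for _ in range(10):
        yield [np.random.uniform(0, 1, (1, 32, 32, 3)).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# May raise on TF versions lacking int8 TRANSPOSE_CONV support.
tflite_model = converter.convert()
print(len(tflite_model) > 0)
```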
Cc: @margaretmz