tensorflow / tpu

Reference models and tools for Cloud TPUs.
https://cloud.google.com/tpu/
Apache License 2.0

Quantization-aware training with efficientnet-lite0 #762

Open siyiding1216 opened 4 years ago

siyiding1216 commented 4 years ago

I used the released checkpoint to fine-tune a triplet model and am seeing this message:

tensorflow.python.framework.errors_impl.NotFoundError: Key model/efficientnet-lite0/model/blocks_10/post_activation_bypass_quant/max not found in checkpoint

The above key is in the eval graph but not in the checkpoint. I used tf.contrib.quantize.create_eval_graph() and tf.contrib.quantize.create_training_graph(quant_delay=0) during training, so the above key was created on my side.

But the released checkpoint contains no such key. How was the checkpoint trained? Was it not produced by quantization-aware training using the functions above?
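
For context, my setup looks roughly like this (a sketch; build_model and the loss are placeholders for my actual triplet model):

```python
import tensorflow as tf  # TF 1.x, where tf.contrib is available

# Training graph: build the float model, then rewrite it for
# quantization-aware training. The rewrite inserts fake-quant ops and the
# per-layer min/max variables (e.g. .../post_activation_bypass_quant/max).
g_train = tf.Graph()
with g_train.as_default():
    logits = build_model(images)          # placeholder for my triplet model
    loss = compute_triplet_loss(logits)   # placeholder loss
    tf.contrib.quantize.create_training_graph(quant_delay=0)
    train_op = tf.train.AdamOptimizer().minimize(loss)

# Eval graph: same model, rewritten for evaluation. Restoring a checkpoint
# that was NOT trained with create_training_graph() fails here with
# NotFoundError, because the quant min/max variables don't exist in it.
g_eval = tf.Graph()
with g_eval.as_default():
    logits = build_model(images)
    tf.contrib.quantize.create_eval_graph()
    saver = tf.train.Saver()
```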

mingxingtan commented 4 years ago

Hi @siyiding1216, the efficientnet-lite models are trained in float32, and then we apply post-training quantization.
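
For reference, the post-training quantization step is conceptually like the sketch below (the path, input shape, and calibration data are placeholders, not the exact released pipeline):

```python
import numpy as np
import tensorflow as tf

# Convert a float32 SavedModel to a quantized TFLite model.
converter = tf.lite.TFLiteConverter.from_saved_model(
    '/path/to/float_saved_model')  # placeholder path
converter.optimizations = [tf.lite.Optimize.DEFAULT]

# Integer quantization calibrates activation ranges on representative inputs.
def representative_dataset():
    for _ in range(100):
        # Placeholder data; use real preprocessed images in practice.
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter.representative_dataset = representative_dataset
tflite_model = converter.convert()

with open('efficientnet-lite0-int8.tflite', 'wb') as f:
    f.write(tflite_model)
```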

But of course, you can also do quantization-aware training, either from scratch or by loading the float checkpoints.
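
If you start quantization-aware training from the float checkpoint, restore only the variables that actually exist in that checkpoint and let the initializer create the new quant min/max variables, which avoids the NotFoundError above. A minimal sketch (build_model, the loss, and the path are placeholders):

```python
import tensorflow as tf  # TF 1.x

logits = build_model(images)                 # placeholder model fn
loss = compute_loss(logits, labels)          # placeholder loss
tf.contrib.quantize.create_training_graph(quant_delay=0)
train_op = tf.train.MomentumOptimizer(0.01, 0.9).minimize(loss)

# The float checkpoint has no quant min/max variables, so restrict the
# Saver to the intersection of graph variables and checkpoint variables.
reader = tf.train.NewCheckpointReader('/path/to/float_ckpt')  # placeholder
ckpt_vars = set(reader.get_variable_to_shape_map())
restore_vars = [v for v in tf.global_variables() if v.op.name in ckpt_vars]
saver = tf.train.Saver(restore_vars)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())   # also inits quant vars
    saver.restore(sess, '/path/to/float_ckpt')    # overwrite float weights
```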