willfu closed this issue 4 years ago.
Weights are automatically dequantised by TFJS, so that is expected. Quantised models therefore cannot be converted as-is; the converted graph always contains float weights.
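For illustration, here is a minimal sketch of the affine dequantisation TFJS applies when it loads quantised weights, assuming the usual `quantization` entry (uint8/uint16 values plus `scale` and `min`) in the weights manifest; the numbers below are made up:

```python
import numpy as np

def dequantise(quantised: np.ndarray, scale: float, minimum: float) -> np.ndarray:
    """Map uint8/uint16 weight values back to float32: float = q * scale + min."""
    return quantised.astype(np.float32) * scale + minimum

# Example values; scale/min would come from the weight's "quantization" entry.
q = np.array([0, 127, 255], dtype=np.uint8)
print(dequantise(q, scale=0.02, minimum=-2.5))  # ~[-2.5, 0.04, 2.6]
```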
Thanks for the reply. However, the dequantised model does not seem to be correct: the converted model cannot detect properly the way the float one does, so I suspect there is an issue with the converted model. @patlevin
@willfu Quantised models have problems when converted back to float. It's best to only convert float models and do post-training quantization in TF or TFLite.
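For reference, a minimal sketch of post-training (dynamic-range) quantization with the TFLite converter, assuming the converted float model has been exported as a SavedModel; the paths are placeholders:

```python
import tensorflow as tf

# Load the float SavedModel produced from the float TFJS model.
converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")

# Enable default optimizations: dynamic-range quantization
# (weights become int8, activations stay float at inference time).
converter.optimizations = [tf.lite.Optimize.DEFAULT]

tflite_model = converter.convert()
with open("model_quant.tflite", "wb") as f:
    f.write(tflite_model)
```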
When I use this tool to convert a quantised model such as PoseNet mobilenet_quant2_100, the converted frozen model is the same size as the float one, i.e. it is no longer quantised. Is that normal? Thanks.