wangjun-0 opened 1 year ago
@wangjun-0, when performing Bias correction, we are passing the default configuration. Is that what you used for your evaluation as well?
Your results are a bit strange. We don't expect to see good accuracy after Bias correction unless it is a quantized evaluation, since bias correction adjusts the biases so that the quantized model's accuracy stays close to the original model's accuracy.
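For context on what "a quantized evaluation" means here: a quantization simulator fake-quantizes weights and activations (quantize to N bits, then dequantize) so the float model experiences quantization error during evaluation. The sketch below is a minimal, self-contained illustration of uniform affine fake-quantization in numpy; it is not AIMET's API, and the function name `fake_quantize` is my own for illustration.

```python
import numpy as np

def fake_quantize(x, bits=8):
    # Uniform affine quantization: map [min, max] onto 2**bits integer
    # levels, then map back to float. The round-trip introduces the same
    # kind of error a quantized evaluation would see.
    x_min, x_max = float(x.min()), float(x.max())
    scale = (x_max - x_min) / (2 ** bits - 1)
    if scale == 0.0:
        return x.copy()  # constant tensor quantizes exactly
    q = np.round((x - x_min) / scale)      # integer levels 0 .. 2**bits - 1
    return q * scale + x_min               # dequantize back to float

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)
w_q = fake_quantize(w, bits=8)
# Per-element error is bounded by scale / 2, small relative to the range.
print(float(np.abs(w - w_q).max()))
```

With well-chosen encodings (min/max ranges) the round-trip error stays small; bias correction then compensates for the systematic shift this error induces in layer outputs, which is why its benefit only shows up when the evaluation itself is quantized.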
I used the AIMET TensorFlow MobileNet quantization example (Examples/tensorflow/quantization/cle_bc.py). The FP32 Top-1 and Top-5 accuracy is good, but the accuracy in quantized mode is bad. The results are as below:

2023-04-24 15:53:54,529 - Eval - INFO - Avg accuracy Top 1: 0.767137 Avg accuracy Top 5: 0.930444 on validation Dataset
2023-04-24 15:53:54,529 - TensorFlowCleBc - INFO - Original Model Top-1 accuracy = 0.77
2023-04-24 15:58:31,534 - Eval - INFO - Avg accuracy Top 1: 0.030242 Avg accuracy Top 5: 0.213710 on validation Dataset
2023-04-24 15:58:31,534 - TensorFlowCleBc - INFO - Original Model Top-1 accuracy on Quant Simulator = 0.03
The quantized result is very bad. How can I configure the Quant Simulator to improve it?
Thanks.