Closed GH-Jo closed 2 years ago
I tried to train quantized models on ImageNet.
A high-bitwidth model like 8-bit seems to train well.
But the accuracy drop is severe in low-bitwidth models.
This may be caused by the one-step shrinkage from the pretrained model to the low-bit model.
So progressive bitwidth shrinkage needs to be introduced in auto compression.
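For reference, one way progressive shrinkage could look is a bitwidth schedule that steps down gradually over epochs instead of jumping from the pretrained full-precision model straight to the target bitwidth. This is only an illustrative sketch, not the repo's API; `fake_quantize` and `bitwidth_schedule` are hypothetical names, and the linear schedule is just one possible choice:

```python
import numpy as np

def fake_quantize(w, bits):
    """Simulated symmetric uniform quantization of weights to `bits` bits."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax
    if scale == 0:
        return w
    return np.round(w / scale) * scale

def bitwidth_schedule(epoch, total_epochs, start_bits=8, end_bits=2):
    """Linearly decay the quantization bitwidth from start_bits to end_bits.

    At epoch 0 the model trains at start_bits (close to the pretrained
    precision); by the last epoch it reaches the target end_bits, so the
    model adapts to each intermediate precision instead of shrinking in
    one step.
    """
    frac = epoch / max(total_epochs - 1, 1)
    return int(round(start_bits - frac * (start_bits - end_bits)))

# Example: apply the scheduled bitwidth to weights each epoch.
w = np.array([1.0, -0.3, 0.7, -1.0])
for epoch in range(10):
    bits = bitwidth_schedule(epoch, total_epochs=10)
    w_q = fake_quantize(w, bits)
```

With this kind of schedule, training could run a few epochs at each intermediate bitwidth (8 → 6 → 4 → 2) before reaching the low-bit target, which is the usual motivation behind progressive quantization-aware training.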