SKKU-ESLAB / Auto-Compression

Automatic DNN compression tool with various model compression and neural architecture search techniques
MIT License

Progressive bitwidth shrinkage for quantized model #7

Closed GH-Jo closed 2 years ago

GH-Jo commented 3 years ago

I tried to train quantized models on ImageNet.

A high-bitwidth model, such as 8-bit, seems to train well.

But the accuracy drop is severe for low-bitwidth models.

This may be caused by the one-step shrinkage from the pretrained full-precision model directly to the low-bit model.

So, progressive bitwidth shrinkage needs to be introduced into Auto-Compression.
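A minimal sketch of the idea, independent of this repo's actual API (the function names `bitwidth_schedule` and `fake_quantize` below are hypothetical, not part of Auto-Compression): instead of jumping from the pretrained model straight to the target bitwidth, the training loop steps the bitwidth down one bit at a time, spending a share of the epochs at each intermediate precision.

```python
def bitwidth_schedule(start_bits, target_bits, total_epochs):
    """Return a per-epoch bitwidth list that shrinks one bit per stage,
    splitting total_epochs evenly across the stages (the last stage
    absorbs any remainder)."""
    stages = list(range(start_bits, target_bits - 1, -1))
    per_stage = total_epochs // len(stages)
    schedule = []
    for i, bits in enumerate(stages):
        if i < len(stages) - 1:
            n = per_stage
        else:
            n = total_epochs - per_stage * (len(stages) - 1)
        schedule.extend([bits] * n)
    return schedule

def fake_quantize(values, bits):
    """Uniform symmetric fake quantization of a list of floats:
    scale to the integer grid for the given bitwidth, round, clip,
    and map back to float (as in quantization-aware training)."""
    qmax = 2 ** (bits - 1) - 1
    max_abs = max(abs(v) for v in values) or 1.0
    scale = max_abs / qmax
    out = []
    for v in values:
        level = max(-qmax, min(qmax, round(v / scale)))
        out.append(level * scale)
    return out

# Example: shrink 8 -> 4 bits over 10 epochs, two epochs per stage.
schedule = bitwidth_schedule(8, 4, 10)
# Each epoch would then train with weights fake-quantized at schedule[epoch],
# so the model adapts gradually rather than in one step.
```

With a schedule like this, the 8-bit stage starts from the well-trained high-bitwidth solution the comment describes, and each subsequent stage only has to recover from a one-bit drop.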