jakc4103 / DFQ

PyTorch implementation of Data-Free Quantization Through Weight Equalization and Bias Correction.

Not working for inceptionv3 #25

pksvision closed this issue 4 years ago

pksvision commented 4 years ago

Hi.

Thanks for the great work.

I tried running main_cls.py with relu, equalize, and correction enabled for InceptionV3 (from the PyTorchCV model zoo) and got about 0.002 accuracy, whereas it works fine for resnet18, resnet50, etc.

Note: I'm using input size 299 for InceptionV3.
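For reference, the run was along these lines (`--equalize` and `--relu` appear verbatim later in this thread; the exact `--quantize` and `--correction` flag names are my assumption):

```
python main_cls.py --quantize --relu --equalize --correction
```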

I'd appreciate your comments.

Thanks !

jakc4103 commented 4 years ago

There is something wrong with the quantized prediction results. I've double-checked the equalize part and it seems to work as expected (if you run "python main_cls.py --equalize --relu", you get the same accuracy as the original model), and the set_quant_min_max part is OK, too. Even with the additional flags "--distill_range" and "--true_data", the accuracy is still low on inceptionv3.
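For anyone wanting to check the equalize step themselves, here is a minimal sketch of the per-channel cross-layer scaling from the DFQ paper; it is an illustration under simplified linear shapes, not the repo's actual code, and `equalize_pair` is a name I made up:

```python
import torch

def equalize_pair(w1, b1, w2, eps=1e-8):
    """Equalize a layer pair y = w2 @ relu(w1 @ x + b1).

    Shapes: w1 (C, C_in), b1 (C,), w2 (C_out, C).
    """
    r1 = w1.abs().amax(dim=1)  # per-output-channel weight range of layer 1
    r2 = w2.abs().amax(dim=0)  # per-input-channel weight range of layer 2
    # s_i = sqrt(r1_i * r2_i) / r2_i equalizes both ranges to sqrt(r1_i * r2_i)
    s = torch.sqrt(r1 * r2).clamp_min(eps) / r2.clamp_min(eps)
    w1_eq = w1 / s[:, None]    # W1 <- S^-1 W1
    b1_eq = b1 / s             # b1 <- S^-1 b1
    w2_eq = w2 * s[None, :]    # W2 <- W2 S
    return w1_eq, b1_eq, w2_eq

# Sanity check: because s > 0 and ReLU is positively homogeneous,
# the composed function is unchanged in float (up to rounding).
w1, b1, w2 = torch.randn(16, 8), torch.randn(16), torch.randn(4, 16)
x = torch.randn(8)
w1e, b1e, w2e = equalize_pair(w1, b1, w2)
y0 = w2 @ torch.relu(w1 @ x + b1)
y1 = w2e @ torch.relu(w1e @ x + b1e)
print(torch.allclose(y0, y1, atol=1e-5))  # True
```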

I suspect the reason is the change in activation ranges (or weight ranges) after equalization, which may not be suitable for quantization on inceptionv3. Unfortunately, I don't have time to verify this, since one would have to check the feature maps (and weights) layer by layer; a sketch of such a check follows. Did you try the official implementation from AIMET?
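In case it helps, a hedged sketch of that layer-by-layer check using PyTorch forward hooks (the function name is mine, and it assumes `loader` yields `(images, labels)` batches):

```python
import torch
import torch.nn as nn

def collect_activation_ranges(model, loader, device="cpu", num_batches=8):
    """Run a few batches and record the (min, max) output of each Conv/Linear layer."""
    stats, hooks = {}, []

    def make_hook(name):
        def hook(module, inputs, output):
            if torch.is_tensor(output):
                lo, hi = output.min().item(), output.max().item()
                old_lo, old_hi = stats.get(name, (lo, hi))
                stats[name] = (min(old_lo, lo), max(old_hi, hi))
        return hook

    for name, module in model.named_modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            hooks.append(module.register_forward_hook(make_hook(name)))

    model.eval().to(device)
    with torch.no_grad():
        for i, (images, _labels) in enumerate(loader):
            if i >= num_batches:
                break
            model(images.to(device))

    for h in hooks:
        h.remove()
    return stats  # {layer_name: (min_activation, max_activation)}
```

Collecting these ranges once on the original model and once on the equalized one, then diffing per layer, should point to the Inception branches whose ranges change the most.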

jakc4103 commented 4 years ago

Closing the issue. Feel free to reopen.