jakc4103 / DFQ

PyTorch implementation of Data Free Quantization Through Weight Equalization and Bias Correction.
MIT License

should give download source of the datasets in /home/jakc4103/WDesktop/dataset/ILSVRC/Data/CLS-LOC/val #13

Closed ClarkChin08 closed 4 years ago

ClarkChin08 commented 4 years ago

In file main_cls.py.

jakc4103 commented 4 years ago

You can download the ImageNet validation set from Kaggle or Academic Torrents.

ClarkChin08 commented 4 years ago

Another question is about the 16-bit quantization for bias: does that mean the bias is 16 bits, or that the weights are quantized to 16 bits? If the weights were quantized to 16 bits, the accuracy improvement might come from the higher precision of the weights, right?

jakc4103 commented 4 years ago

- weight: 8 bits
- activation: 8 bits
- bias: 16 bits

All quantization in this repo is per-tensor based.
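To illustrate what per-tensor quantization at these bit widths means, here is a minimal sketch (in numpy for self-containment; `quantize_per_tensor` is a hypothetical helper, not the repo's code). A single scale and zero-point cover the whole tensor, so weights get 256 levels at 8 bits while the bias gets 65536 levels at 16 bits:

```python
import numpy as np

def quantize_per_tensor(x, n_bits):
    """Per-tensor asymmetric fake-quantization: one scale and
    zero-point for the entire tensor (illustrative sketch only)."""
    qmin, qmax = 0, 2 ** n_bits - 1
    x_min, x_max = float(x.min()), float(x.max())
    scale = max(x_max - x_min, 1e-8) / (qmax - qmin)
    zero_point = int(round(qmin - x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax)
    return (q - zero_point) * scale  # dequantized values

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 3, 3, 3)).astype(np.float32)
b = rng.standard_normal(64).astype(np.float32)

w_q = quantize_per_tensor(w, 8)   # weights: 8 bits
b_q = quantize_per_tensor(b, 16)  # bias: 16 bits
```

The rounding error per element is bounded by roughly half the step size, which is why the 16-bit bias carries far less quantization noise than an 8-bit one over the same value range.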

bangawayoo commented 4 years ago

Bias quantized to 8 bits also seems to work fine for me.

jakc4103 commented 4 years ago

@bangawayoo well, that's good to know! It's probably because I've fixed some minor bugs in the set_quant_minmax function recently. I'll update the 8-bit results after I test it.
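For readers unfamiliar with that function: in a data-free setting, activation ranges cannot be calibrated on real inputs, so the DFQ paper estimates them from the preceding BatchNorm's learned statistics. The sketch below shows that idea under stated assumptions (a `mean ± n_sigma * std` range, clipped at zero after ReLU); the repo's actual set_quant_minmax logic may differ in details:

```python
import numpy as np

def estimate_quant_minmax(bn_mean, bn_var, n_sigma=6.0, relu=True):
    """Estimate an activation tensor's quantization range from
    per-channel BatchNorm statistics (illustrative sketch of the
    data-free range-setting idea, not the repo's exact code)."""
    std = np.sqrt(bn_var)
    # Aggregate the per-channel ranges into one per-tensor range.
    x_min = float((bn_mean - n_sigma * std).min())
    x_max = float((bn_mean + n_sigma * std).max())
    if relu:
        x_min = max(x_min, 0.0)  # ReLU outputs are non-negative
    return x_min, x_max

lo, hi = estimate_quant_minmax(np.zeros(4), np.ones(4))
```

A bug in this range estimate directly changes the quantization step size for every activation, which is consistent with small fixes here shifting 8-bit accuracy noticeably.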