yhhhli / APoT_Quantization

PyTorch implementation of APoT quantization (ICLR 2020)

Size and accuracy #12

Open Cyber-Neuron opened 3 years ago

Cyber-Neuron commented 3 years ago

Hi,

Based on the provided pretrained model (res18_2bit), I got 64.690% accuracy, and the quantized model size is 5MB (gzip) or 3.4MB (7zip). This is quite different from the results in your paper. Can you point out why that is? I just ran: python main.py -a resnet18 --bit 2 --pretrained resnet18_2bit.pth

Thanks
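For context, a back-of-the-envelope estimate of the ideal 2-bit checkpoint size. The parameter count is the standard ImageNet ResNet-18 figure, and the assumption that first/last layers and BatchNorm are excluded is mine, not taken from the repo; the saved checkpoint also stores fp32 tensors, so gzip/7zip can only approximate this bound, which is roughly consistent with the 3.4MB observed above.

```python
# Rough lower bound on the on-disk size of a 2-bit ResNet-18,
# assuming ~11.7M weight parameters (standard ImageNet ResNet-18)
# and ignoring layers typically kept at higher precision
# (first conv, final FC, BatchNorm) -- an assumption, not repo fact.
params = 11_700_000          # approximate parameter count (assumption)
bits = 2                     # APoT weight bit-width
ideal_mb = params * bits / 8 / 1e6
print(f"ideal packed size: {ideal_mb:.2f} MB")  # about 2.9 MB
```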

Euphoria16 commented 3 years ago

Hi, I met the same problem. The reproduced accuracy is 64.74%, which is much lower than the result in the paper. Have you solved this problem?

Cyber-Neuron commented 3 years ago

> Hi, I met the same problem. The reproduced accuracy is 64.74%, which is much lower than the result in the paper. Have you solved this problem?

Kind of. The batch size matters; however, the accuracy is still around ~65%, which is on par with other 2-bit quantization methods.

yhhhli commented 3 years ago

Hi,

The accuracy mismatch is probably due to differences in the data-loader implementation between my training environment and the official PyTorch environment.

Did you verify it through direct training?
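One common place where two dataloaders silently disagree is the validation preprocessing arithmetic. Below is a minimal, dependency-free sketch of the standard ImageNet pipeline (resize shorter side to 256, center-crop 224); the exact values and rounding used by this repo are an assumption, so this is only a template for what to compare between environments.

```python
# Sketch of standard ImageNet validation preprocessing arithmetic.
# Off-by-one differences in rounding or crop placement between two
# dataloader implementations can shift top-1 accuracy slightly.

def resize_shorter_side(w, h, target=256):
    """Scale (w, h) so the shorter side equals `target`."""
    if w < h:
        return target, round(h * target / w)
    return round(w * target / h), target

def center_crop_box(w, h, size=224):
    """PIL-style (left, upper, right, lower) center-crop box."""
    left = (w - size) // 2
    upper = (h - size) // 2
    return left, upper, left + size, upper + size

# Example: a 500x375 image is resized to 341x256, then center-cropped.
w, h = resize_shorter_side(500, 375)
print(w, h)                   # 341 256
print(center_crop_box(w, h))  # (58, 16, 282, 240)
```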

yhhhli commented 3 years ago

Hi, I found a typo in the dataloader; can you test it now?