yhhhli / APoT_Quantization

PyTorch implementation for the APoT quantization (ICLR 2020)

Some results about resnet20 on cifar10 #21

Open xiaolonghao opened 3 years ago

xiaolonghao commented 3 years ago

Hello, I am working on quantization and tried to reproduce your results, but unfortunately they do not match. The full-precision accuracy reported in your paper is 91.6, yet the code reaches 92.9, which is considerably higher than the paper's number, while for 4-bit quantization the paper reports 92.3. Comparing the quantized model against an understated full-precision baseline is unfair. Please confirm whether the results reported in the paper are correct.

In addition, does the paper report the test accuracy at the end of training, or the best test accuracy observed during training? Since the paper's numbers differ from what the code produces, please clarify.

Finally, my repeated runs give different results. With all other hyperparameters kept the same, can training on multiple GPUs versus a single GPU make the results much worse? Thank you.

(screenshot attached: 图片1)
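For what it's worth, here is a minimal sketch (not part of this repo) of two things that often explain such discrepancies: fixing the random seeds so repeated runs are comparable, and tracking best-epoch versus final-epoch test accuracy. All names are hypothetical; `train_one_epoch` and `evaluate` are stand-ins for whatever training and test loops the repo actually uses.

```python
import random
from typing import Callable, Iterable

import numpy as np
import torch


def set_seed(seed: int = 42) -> None:
    """Fix all RNGs so repeated runs are comparable (hypothetical helper)."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # cuDNN autotuning selects kernels non-deterministically; disable it.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


def train_and_report(
    model: torch.nn.Module,
    train_loader: Iterable,
    test_loader: Iterable,
    train_one_epoch: Callable,  # stand-in for the repo's training step
    evaluate: Callable,         # stand-in for the repo's test evaluation
    num_epochs: int = 200,
) -> None:
    best_acc, last_acc = 0.0, 0.0
    for _ in range(num_epochs):
        train_one_epoch(model, train_loader)
        last_acc = evaluate(model, test_loader)
        best_acc = max(best_acc, last_acc)
    # Papers may quote either number; on CIFAR-10 they can differ
    # by several tenths of a point.
    print(f"final-epoch acc: {last_acc:.2f}  best acc: {best_acc:.2f}")
```

Even with seeds fixed, multi-GPU runs can still diverge slightly from single-GPU runs (e.g., different per-GPU batch splits and non-deterministic gradient reduction), so some gap between the two setups is expected.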