yhhhli / APoT_Quantization

PyTorch implementation of APoT quantization (ICLR 2020)

Imagenet Accuracy quickly dropping #6

Closed: mostafaelhoushi closed this issue 4 years ago

mostafaelhoushi commented 4 years ago

When I try this command in the ImageNet folder:

python main.py -a resnet18 -b 5 --data <path to imagenet directory>

I get this log. Is that expected?

[screenshot of the training log]

yhhhli commented 4 years ago

Hi, have you tried using a lower learning rate?

yhhhli commented 4 years ago

Hello, mostafaelhoushi,

I trained our model using our internal framework, so the training code in this repo may have some bugs. Thanks for discovering this problem; here are some suggestions, and maybe we can find the cause together:

1. Try a full-precision model first by setting bit=32. If that model trains normally, the problem must be in the quantization.
2. Try not learning the clipping threshold: set the LR of alpha to 0 and see whether the model can still be trained (a minimal sketch of this is given below).
3. If the full-precision model cannot be trained either, the problem must be the hyper-parameters; try a lower LR.
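A minimal sketch of suggestion 2, assuming the clipping thresholds are learnable parameters whose names contain "alpha"; the parameter name and the use of SGD param groups are assumptions for illustration, not the exact API of main.py:

```python
import torch
import torchvision.models as models

# Stand-in for the quantized ResNet-18; in the APoT model the clipping
# thresholds would be collected the same way, by parameter name.
model = models.resnet18()

# Split parameters: clipping thresholds ("alpha") vs. everything else.
alpha_params = [p for n, p in model.named_parameters() if "alpha" in n]
other_params = [p for n, p in model.named_parameters() if "alpha" not in n]

# Suggestion 2: keep the clipping threshold fixed by giving it lr=0,
# while the rest of the network trains with a (lower) learning rate.
optimizer = torch.optim.SGD(
    [
        {"params": other_params, "lr": 0.01},
        {"params": alpha_params, "lr": 0.0},  # do not learn alpha
    ],
    momentum=0.9,
    weight_decay=1e-4,
)
```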

mostafaelhoushi commented 4 years ago

Thanks @yhhhli. I played around with the learning rate and batch size. When I set the batch size to 128 and the learning rate to 0.001, the training accuracy starts at around 70% and soon reaches 89%. There may well be an even better combination of learning rate and batch size.
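For reference, the run that worked here would look roughly like this (the --lr flag is an assumption carried over from the standard PyTorch ImageNet example; only -a, -b, and --data appear in this thread):

python main.py -a resnet18 -b 128 --lr 0.001 --data <path to imagenet directory>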

Just a side note: looking at the code, the default batch size seems to be 1024. However, when we run main.py without setting the batch size, the number of batches per epoch shown in the screenshot is 256,234. If the ImageNet training set has around 1 million images, that means the effective batch size is about 4. I don't know why the code seems to have a default batch size of 1024 while the actual batch size when running is around 4. A batch size of 4 would be expected to cause this degradation in accuracy with the default learning rate.
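As a rough sanity check on that arithmetic, using the usual ImageNet-1k training-set size of 1,281,167 images and the batch count from the screenshot:

```python
num_train_images = 1_281_167   # standard ImageNet-1k training-set size
batches_per_epoch = 256_234    # batch count shown in the log

# Effective batch size actually used in the run
print(num_train_images / batches_per_epoch)  # ~5.0, nowhere near the default 1024
```

So the effective batch size was a single-digit number rather than 1024, which would explain the accuracy collapse under the default learning rate.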

mostafaelhoushi commented 4 years ago

I found the cause of the problem: I mistakenly used -b 5 intending to set the bitwidth to 5, while -b actually sets the batch size. Sorry about that!