yhhhli / APoT_Quantization

PyTorch implementation of APoT quantization (ICLR 2020)

Weight normalization #4


GendalfSeriyy commented 4 years ago

Hello! I found that without weight normalization the network stops learning and the loss becomes NaN. Could you please explain why this happens and how it can be fixed?

yhhhli commented 4 years ago

Hi,

When we first tested our algorithm without weight normalization, we also ran into that problem. The gradients of the clipping parameter in several layers would suddenly explode. We then tried using a smaller learning rate for the clipping parameter, but the performance was not good.

We then realized the problem is that the distribution of the weights changes significantly during training, and there is no heuristic that tells us when to increase the LR (to accommodate the shift in the weight distribution) or when to decrease it (to stabilize training). Therefore, we came up with a method to normalize the weights. Weight normalization is inspired by Batch Normalization on activations, since we found that learning the clipping parameter for activation quantization does not have the NaN issue.
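
For concreteness, here is a minimal sketch (not the repo's exact code) of the weight-normalization idea described above: the real-valued weights are normalized to zero mean and unit variance before clipping and quantization, so the learnable clipping parameter sees a stable weight distribution. The function names, the `eps` constant, and the usage comments are assumptions for illustration.

```python
import torch

def weight_normalize(w, eps=1e-5):
    # Normalize real-valued weights to zero mean and unit variance before
    # clipping/quantization; eps is an assumed small constant for stability.
    mu = w.mean()
    std = w.std()
    return (w - mu) / (std + eps)

# Example usage (apot_quantize and alpha are hypothetical placeholders):
# w_hat = weight_normalize(conv.weight)
# w_q = apot_quantize(torch.clamp(w_hat, -alpha, alpha), alpha)
```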

GendalfSeriyy commented 4 years ago

Thanks for the answer! We also noticed that when we quantize the first and last layers to the same bit-width as the other layers, the network also fails to learn. To be more precise, the network trains for a certain number of epochs, but then the accuracy drops to 10% and no longer improves. Have you carried out such experiments?

yhhhli commented 4 years ago

I think weight normalization cannot be applied to the last layer, because the output of the last layer is the output of the network and there is no BN afterwards to standardize its distribution. For the last layer, maybe you can apply the DoReFa scheme to quantize the weights and our APoT quantization for the activations.
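
For reference, a minimal sketch of the DoReFa-style weight quantizer mentioned above (the straight-through estimator used for backpropagation is omitted; `k` is the bit-width):

```python
import torch

def uniform_quantize(x, k):
    # Uniformly quantize x in [0, 1] to k bits.
    n = float(2 ** k - 1)
    return torch.round(x * n) / n

def dorefa_quantize_weight(w, k):
    # DoReFa weight quantization: squash weights with tanh, map to [0, 1],
    # quantize uniformly, then rescale back to [-1, 1].
    t = torch.tanh(w)
    x = t / (2 * t.abs().max()) + 0.5
    return 2 * uniform_quantize(x, k) - 1
```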

wu-hai commented 2 years ago

Thanks for the great work and the clarification on the weight_norm! I want to ask: after applying weight normalization to the real-valued weights, should the lr for \alpha be the same as for the weights, or should the lr and weight_decay for \alpha be adjusted, like the settings in your commented code (https://github.com/yhhhli/APoT_Quantization/blob/a8181041f71f07cd2cefbfcf0ace05e47bb0c5d0/ImageNet/main.py#L181)?
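
For what it's worth, one common way to give \alpha its own lr and weight_decay is a separate optimizer parameter group. The sketch below is only an illustration under assumptions: the `'alpha'` name filter and the hyperparameter values are not the repo's actual settings (see the linked line in main.py for those).

```python
import torch

def build_optimizer(model, base_lr=0.1, alpha_lr=0.01, weight_decay=1e-4):
    # Put the learnable clipping parameters (assumed here to contain "alpha"
    # in their names) in their own group with a separate lr and no weight decay.
    alpha_params = [p for n, p in model.named_parameters() if 'alpha' in n]
    other_params = [p for n, p in model.named_parameters() if 'alpha' not in n]
    return torch.optim.SGD(
        [
            {'params': other_params},
            {'params': alpha_params, 'lr': alpha_lr, 'weight_decay': 0.0},
        ],
        lr=base_lr, momentum=0.9, weight_decay=weight_decay)
```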