In this story, Shake-Shake Regularization (Shake-Shake), by Xavier Gastaldi from London Business School, is briefly reviewed. The motivation of this paper is that data augmentation is traditionally applied only to the input image; this paper extends the idea of augmentation to the internal representations of the network.
It was found in prior work that adding noise to the gradient during training helps the training and generalization of complicated neural networks. Shake-Shake can be seen as an extension of this concept, where gradient noise is replaced by a form of gradient augmentation.
This is a paper in the 2017 ICLR Workshop with over 10 citations, and its long version on 2017 arXiv has over 100 citations.
Left: Forward training pass. Center: Backward training pass. Right: At test time.
With Shake-Shake Regularization, a random scaling coefficient $\alpha_i$ is added to combine the two residual branches:

$$x_{i+1} = x_i + \alpha_i \mathcal{F}(x_i, \mathcal{W}_i^{(1)}) + (1 - \alpha_i)\mathcal{F}(x_i, \mathcal{W}_i^{(2)})$$

where $\alpha_i$ follows a uniform distribution between 0 and 1 and is resampled before each forward pass. In the backward pass, $\alpha_i$ is replaced by a new independent uniform random variable $\beta_i$.
$\alpha_i$ is set to $0.5$ at test time, just like Dropout.
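To make the forward/backward asymmetry concrete, here is a minimal PyTorch-style sketch of the shake operation (an illustration of the description above, not the author's released code; the `ShakeShakeFunction` and `shake_shake` names are hypothetical):

```python
import torch


class ShakeShakeFunction(torch.autograd.Function):
    """Combines two branch outputs with alpha in the forward pass
    and deliberately substitutes an independent beta in the backward pass."""

    @staticmethod
    def forward(ctx, x1, x2, alpha, beta):
        ctx.save_for_backward(beta)
        return alpha * x1 + (1.0 - alpha) * x2

    @staticmethod
    def backward(ctx, grad_output):
        (beta,) = ctx.saved_tensors
        # The true gradient would reuse alpha; Shake-Shake replaces it with beta.
        return beta * grad_output, (1.0 - beta) * grad_output, None, None


def shake_shake(x1, x2, training=True):
    if training:
        # One coefficient per image in the mini-batch ("Image" level).
        alpha = torch.rand(x1.size(0), 1, 1, 1, device=x1.device)
        beta = torch.rand(x1.size(0), 1, 1, 1, device=x1.device)
        return ShakeShakeFunction.apply(x1, x2, alpha, beta)
    # At test time both branches are weighted by the expected value 0.5.
    return 0.5 * (x1 + x2)
```

Sampling one coefficient per image corresponds to the "Image" level in the paper; sampling a single coefficient for the whole mini-batch would correspond to the "Batch" level.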
26 2×32d ResNet (i.e. the network has a depth of 26, 2 residual branches and the first residual block has a width of 32) is used.
Error rates on CIFAR-10.
Shake-Shake-Image (S-S-I) obtains the best results for the 26 2×64d and 26 2×96d ResNets.
Error rates on CIFAR-100.
Using Shake in the forward pass again improves performance.
In particular, Shake-Even-Image (S-E-I) performs the best.
Correlation results on E-E-B (Even-Even-Batch) and S-S-I (Shake-Shake-Image) models.
Layer-wise correlation between the first 3 layers of each residual block.
The summation at the end of the residual blocks forces an alignment of the layers on the left and right residual branches.
The regularization reduces the correlation between the two residual branches.
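As a rough illustration of how such a branch correlation could be measured (an assumed procedure, not necessarily the paper's exact protocol), one can flatten the two branch outputs for each image and compute their Pearson correlation:

```python
import torch


def branch_correlation(y1, y2):
    """Pearson correlation between flattened activations of two residual branches,
    averaged over the mini-batch."""
    a = y1.flatten(start_dim=1)  # (N, C*H*W)
    b = y2.flatten(start_dim=1)
    a = a - a.mean(dim=1, keepdim=True)
    b = b - b.mean(dim=1, keepdim=True)
    corr = (a * b).sum(dim=1) / (a.norm(dim=1) * b.norm(dim=1) + 1e-8)
    return corr.mean()
```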
Update Rules for $\beta$.
Left: Training curves (dark) and test curves (light) of models M1 to M5. Right: Illustration of the different methods in the table above.
The further away $\beta$ is from $\alpha$, the stronger the regularization effect.
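The three basic update rules for $\beta$ can be summarized in a small sketch (the `sample_beta` helper and its option strings are illustrative names): "Shake" draws a new independent random value, "Even" fixes it to 0.5, and "Keep" reuses the forward coefficient.

```python
import torch


def sample_beta(alpha, rule="shake"):
    """Choose the backward coefficient beta given the forward coefficient alpha."""
    if rule == "shake":
        return torch.rand_like(alpha)      # new independent uniform value
    if rule == "even":
        return torch.full_like(alpha, 0.5)  # constant 0.5
    if rule == "keep":
        return alpha                         # reuse the forward coefficient
    raise ValueError(f"unknown rule: {rule}")
```

Under these rules, "Shake" keeps $\beta$ furthest from $\alpha$ on average, which matches the observation that it gives the strongest regularization.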
Error rates on CIFAR-10.
With this simple yet novel idea and, of course, the positive results, it was published in the 2017 ICLR Workshop, which is very encouraging.
Sik-Ho Tang. Review: Shake-Shake Regularization (Image Classification).