lessw2020 / Ranger-Deep-Learning-Optimizer

Ranger - a synergistic optimizer using RAdam (Rectified Adam), Gradient Centralization and LookAhead in one codebase
Apache License 2.0

larger learning rate + large weight decay performs better? #18

Open askerlee opened 4 years ago

askerlee commented 4 years ago

Hi all, my colleague and I tried combining a (relatively) large Ranger learning rate (say, 0.001) with a large weight decay (say, 0.1). The large decay seems to lead to better performance. We tried two different models and observed a 0.5-1.5% increase in ImageNet classification accuracy, but both were customized models rather than standard ones like ResNet. Has anyone else found similar results?
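
For reference, a minimal sketch of the setup described above, assuming `ranger.py` from this repo is importable and using a torchvision ResNet-18 as a stand-in for the customized models mentioned (the batch of random tensors is only illustrative):

```python
import torch
import torchvision.models as models
from ranger import Ranger

model = models.resnet18()  # placeholder; the issue used customized (non-standard) models

# Relatively large learning rate (0.001) combined with a large weight decay (0.1)
optimizer = Ranger(model.parameters(), lr=1e-3, weight_decay=1e-1)

# One standard training step
criterion = torch.nn.CrossEntropyLoss()
images = torch.randn(8, 3, 224, 224)          # dummy batch standing in for ImageNet data
labels = torch.randint(0, 1000, (8,))

optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```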