juntang-zhuang / Adabelief-Optimizer

Repository for NeurIPS 2020 Spotlight "AdaBelief Optimizer: Adapting stepsizes by the belief in observed gradients"
BSD 2-Clause "Simplified" License

0.1.0 changes for ranger_adabelief #19

Closed: bratao closed this issue 3 years ago

bratao commented 3 years ago

Hi @juntang-zhuang, super excited to try the new improvements. I saw that you did not update the ranger version. Do you plan to add the improvements there too?

juntang-zhuang commented 3 years ago

Hi @bratao, thanks for asking. I currently don't have a plan for a big change to the ranger version, only for fixing some small errors. I don't have much experience with the ranger version; decoupled weight decay and rectification are already turned on by default in ranger (these two defaults were modified in adabelief-pytorch), except that eps is set to 1e-5 in ranger. Do you have a feeling for what a good default value for eps in ranger-adabelief would be? Or any other ideas for potential improvements?
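For anyone who wants to experiment with the eps default, a minimal sketch of overriding it on the Ranger variant. The package and class names follow the repo's PyPI package (pip install ranger-adabelief); the exact argument names are assumed here and may differ between versions.

import torch
from ranger_adabelief import RangerAdaBelief  # assumed import path

model = torch.nn.Linear(10, 2)
optimizer = RangerAdaBelief(
    model.parameters(),
    lr=1e-3,
    eps=1e-5,           # current Ranger default under discussion
    weight_decay=1e-2,  # decoupled by default in the Ranger version
)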

dvolgyes commented 3 years ago

Hi,

I would recommend setting the default value of 'weight decoupling' to true. Adam originally had coupled weight decay, but after this paper, https://arxiv.org/abs/1711.05101, PyTorch (among others) introduced AdamW. There is no supporting evidence for any case where coupled weight decay outperforms the decoupled one, so even the PyTorch default changed to decoupled. Actually, they did not just change the default, they completely removed the option for the old behavior, making it backward incompatible. (So yes, Adam and AdamW are now almost the same in PyTorch, but in Adam the default decay is 0.)

If you know of any counterexamples where coupling helps, please share a link, but otherwise I strongly recommend setting the default to True. (And obviously, for a fair comparison the same weight decay scheme should be used.)
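For readers skimming the thread, a minimal sketch of the difference between the two schemes, written as a plain SGD-style update for clarity (illustrative only, not the actual Adam/AdamW implementation):

import torch

def coupled_step(p: torch.Tensor, grad: torch.Tensor, lr: float, wd: float) -> None:
    # L2-style ("coupled") decay: the wd term is folded into the gradient,
    # so in adaptive optimizers it also gets rescaled by the adaptive terms.
    g = grad + wd * p
    p -= lr * g

def decoupled_step(p: torch.Tensor, grad: torch.Tensor, lr: float, wd: float) -> None:
    # AdamW-style ("decoupled") decay: the weights are shrunk directly,
    # independently of the gradient-based update.
    p.mul_(1 - lr * wd)
    p -= lr * grad

p, grad = torch.ones(3), torch.full((3,), 0.5)
decoupled_step(p, grad, lr=0.1, wd=0.01)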

juntang-zhuang commented 3 years ago

@dvolgyes Thanks for the feedback; in all of the latest implementations the default for weight_decouple is already True. BTW, since you mentioned that decoupled decay is enabled in Adam in PyTorch, I checked the source code and it seems Adam has not changed much, am I missing something? Could you point out where decoupled weight decay is enabled in PyTorch's Adam? Thanks a lot.

dvolgyes commented 3 years ago

Hi,

It depends on how you see it. Compare the documentation of Adam between versions 1.5.1 and 1.6:
https://pytorch.org/docs/1.5.1/optim.html?highlight=adam#torch.optim.Adam
https://pytorch.org/docs/1.6.0/optim.html?highlight=adam#torch.optim.Adam

The old one was: "Implements Adam algorithm. It has been proposed in Adam: A Method for Stochastic Optimization."

The new one: "Implements Adam algorithm. It has been proposed in Adam: A Method for Stochastic Optimization. The implementation of the L2 penalty follows changes proposed in Decoupled Weight Decay Regularization."

I did not go into the details of how they implemented it, but I should have. It turns out the documentation is quite likely wrong: it suggests they updated it, but they didn't, and they still use the old version without weight decoupling. https://github.com/pytorch/pytorch/issues/42843

I still hold that I haven't seen any paper where weight decoupling shows worse performance than the non-decoupled version, but I take back my PyTorch claim; I was misled by the documentation.

So my current view: decoupled weight decay should still be the default, even though PyTorch's own Adam has not actually switched to it.

juntang-zhuang commented 3 years ago

@dvolgyes Thanks a lot. It does seem weird, since the "source code" page for Adam in PyTorch 1.6 still shows

# Adam in PyTorch 1.6: the decay term is added to the gradient,
# i.e. coupled (L2-style) weight decay.
if group['weight_decay'] != 0:
    grad = grad.add(p, alpha=group['weight_decay'])

Then it's not decoupled weight decay. Quite weird; I guess the documentation page is wrong. But thanks for the suggestion, the default weight_decouple is now turned on.
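For contrast, a decoupled (AdamW-style) step would decay the parameter directly instead of folding the term into the gradient, roughly like this (an illustrative sketch, not PyTorch's actual code):

if group['weight_decay'] != 0:
    # decoupled: shrink the parameter itself, so the decay is not scaled
    # by the adaptive second-moment term
    p.mul_(1 - group['lr'] * group['weight_decay'])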

dvolgyes commented 3 years ago

"Code never lies, comments sometimes do." -- Ron Jeffries :) Even worse, the bug was discovered before 1.7, but it might be that even 1.7.1 will be released without it. Anyway, you code was always correct, but I am glad that I could convince you to change the default.