lessw2020 / Ranger21

Ranger deep learning optimizer rewrite to use newest components
Apache License 2.0

lr below min_lr check too aggressive #16

Open kai-tub opened 3 years ago

kai-tub commented 3 years ago

Hi,

First of all, thank you for providing such an awesome optimizer and releasing an arXiv reference! I am still working on integrating the optimizer into my project, but I am getting quite a few superfluous warnings:

error in warmdown - lr below min lr. current lr = 2.999999999999997e-05
auto handling but please report issue!

> min_lr = 3e-5

This warning is caused by the following check:

 if new_lr < self.min_lr:

from here

Due to floating-point rounding errors, the new_lr might become lower than the predefined min_lr. I would suggest replacing this check with something like:

if (new_lr - self.min_lr) < -eps:

That would be a simple fix; a more sophisticated option would be a check similar to np.isclose.
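For illustration, here is a minimal, self-contained sketch of such a tolerance-based check (the `eps` value and the helper names are mine, not part of Ranger21; `math.isclose` is used as a stdlib stand-in for `np.isclose`):

```python
import math

def lr_below_min(new_lr: float, min_lr: float, eps: float = 1e-12) -> bool:
    # Plain `new_lr < min_lr` fires on rounding noise: e.g.
    # 2.999999999999997e-05 < 3e-05 is True even though the two
    # values differ only by ~3e-20.
    return (new_lr - min_lr) < -eps

def lr_below_min_isclose(new_lr: float, min_lr: float) -> bool:
    # Same idea with a relative tolerance instead of an absolute one.
    return new_lr < min_lr and not math.isclose(new_lr, min_lr, rel_tol=1e-9)

assert not lr_below_min(2.999999999999997e-05, 3e-05)  # rounding noise: no warning
assert lr_below_min(2.9e-05, 3e-05)                    # genuinely below: warn
```

Either variant keeps the warning for a genuinely too-low learning rate while ignoring differences at the level of float rounding.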

I am happy to make a PR if you'd like :)

JaheimLee commented 3 years ago

I also get this warning, but in my case, when fine-tuning a BERT model, new_lr is always smaller than self.min_lr.

TolerantChief commented 1 year ago

Did you ever fix this problem? I've got a similar issue and I don't know how to fix it. I did what you suggested and I no longer get that warning, but I still get this one:

TRAIN;18;38.92721739355123;0.6500738552437223;91.424s
TEST;18;39.037711521364606;0.6040189125295509;11.692s
error in warmdown pct calc. new pct = 1.000928505106778 auto handled but please report issue
error in warmdown pct calc. new pct = 1.0018570102135562 auto handled but please report issue
error in warmdown pct calc. new pct = 1.0027855153203342 auto handled but please report issue
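For what it's worth, pct values just barely above 1.0 look like the step counter running slightly past the scheduled total, so the same tolerance/clamp idea should silence this warning too. A minimal sketch, assuming a linear warmdown fraction (the function and its arguments are hypothetical, not Ranger21's actual code):

```python
def warmdown_pct(current_step: int, start_step: int, total_steps: int) -> float:
    # Fraction of the warmdown phase completed. If training runs a few
    # steps past total_steps (or total_steps was rounded down), the raw
    # value overshoots 1.0 by a tiny amount -- matching the 1.0009...,
    # 1.0018..., 1.0027... values in the log above.
    raw = (current_step - start_step) / max(1, total_steps - start_step)
    # Clamp to [0, 1] instead of warning on a harmless overshoot.
    return min(max(raw, 0.0), 1.0)
```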