facebookresearch / schedule_free

Schedule-Free Optimization in PyTorch
Apache License 2.0

How do I retrieve the current LR from the optimizer now? #33

Closed pkpro closed 4 months ago

pkpro commented 4 months ago

It seems that self.optimizer.param_groups[0]['lr'] is static and always stays at the value it was initially set to. Is there any way to see the current LR, or is the whole point of schedule_free to get rid of this indicator because it is no longer needed?
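For reference, a minimal sketch of what I am doing (assuming the `schedulefree` package and its `AdamWScheduleFree` class); the value stored in param_groups never changes:

```python
import torch
import schedulefree  # assumed: pip-installed schedulefree package

model = torch.nn.Linear(10, 2)
# Assumed constructor; lr is the base learning rate passed at creation.
optimizer = schedulefree.AdamWScheduleFree(model.parameters(), lr=1e-3)

# This prints 0.001 before training and appears unchanged after any number of steps.
print(optimizer.param_groups[0]['lr'])
```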

adefazio commented 4 months ago

The schedule-free learning approach doesn't decrease the LR over time; it really is static throughout training. It achieves convergence by averaging the iterate sequence rather than by decreasing the step size.
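To illustrate the idea (this is only a simplified sketch of constant-step-size optimization with Polyak-style iterate averaging, not the library's exact update rule), the learning rate stays fixed while a running average of the iterates is what converges:

```python
import torch

w = torch.zeros(3, requires_grad=True)   # current iterate
w_avg = torch.zeros(3)                   # running average of iterates
lr = 0.1                                 # constant learning rate, never decayed

for t in range(1, 101):
    loss = ((w - torch.ones(3)) ** 2).sum()
    loss.backward()
    with torch.no_grad():
        w -= lr * w.grad                    # constant-LR gradient step
        w.grad.zero_()
        w_avg += (w.detach() - w_avg) / t   # running mean of the iterate sequence

print(w_avg)  # the averaged iterate converges even though lr is fixed
```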