Closed — bhack closed this issue 2 months ago
What is the best way to retrieve the LR for this optimizer at each train step, so I can plot an LR curve without using an LR scheduler?
Is it similar to this approach: https://discuss.pytorch.org/t/get-current-lr-of-optimizer-with-adaptive-lr/24851/2
The LR doesn't change during optimization. It achieves convergence by averaging rather than decreasing the learning rate.
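For reference, a minimal sketch of the pattern from the linked forum answer, assuming a standard PyTorch optimizer (plain SGD here as a stand-in): the current LR lives in `optimizer.param_groups`, so it can be logged at every step even without a scheduler. With this optimizer the logged value will simply stay constant, consistent with the answer above.

```python
import torch

model = torch.nn.Linear(4, 1)  # toy model for illustration
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

lr_history = []
for step in range(3):  # stand-in for the real training loop
    optimizer.zero_grad()
    loss = model(torch.randn(8, 4)).pow(2).mean()
    loss.backward()
    optimizer.step()
    # Same pattern as the linked forum answer: read the LR from the
    # first (and here, only) param group after each step.
    lr_history.append(optimizer.param_groups[0]["lr"])

print(lr_history)  # constant: no scheduler, so the LR never changes
```

Plotting `lr_history` would therefore yield a flat line; convergence comes from averaging, not from the LR curve.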