Closed: peastman closed this issue 1 year ago.
I think there is an option for this in PyTorch, though I haven't looked super closely. See `is_better` in https://pytorch.org/docs/1.11/_modules/torch/optim/lr_scheduler.html#ReduceLROnPlateau. Something to do with setting `threshold`?
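If I'm reading the linked source right (a paraphrase, assuming `mode: min` with the default `threshold_mode: rel`): `is_better` only counts a new metric value $a$ as an improvement over the running best $b$ when

$$a < b \cdot (1 - \mathrm{threshold}),$$

so `threshold` sets a relative margin the loss has to clear.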
You can specify these options in the YAML with `lr_scheduler_*` keys.
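For example (a sketch: `lr_scheduler_name`, `lr_scheduler_patience`, and `lr_scheduler_factor` follow nequip's example configs, the values are illustrative, and other constructor arguments such as `threshold` should pass through the same way):

```yaml
lr_scheduler_name: ReduceLROnPlateau
lr_scheduler_patience: 100   # constructor arguments, passed via the lr_scheduler_ prefix
lr_scheduler_factor: 0.5
```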
That's exactly what I need. These options aren't listed in https://github.com/mir-group/nequip/blob/main/configs/full.yaml. Do we need code changes to support them, or will it automatically translate any `lr_scheduler_X` option into the `X` argument of the scheduler?
It should do it automatically; the configuration system is built to propagate options from their hierarchical prefixes into the right objects. You can confirm this by briefly running in `verbose: debug` mode, which explicitly logs the mapping from input keys to the various objects being built; check that the right values are set when `ReduceLROnPlateau` gets instantiated (just grep for `ReduceLROnPlateau` in the log).
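That is (a minimal sketch; `verbose` is a top-level key in the example configs):

```yaml
verbose: debug   # logs the key-to-object mapping; grep the output for ReduceLROnPlateau
```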
Oh my bad, of course, I misread.
Setting `lr_scheduler_threshold` works perfectly. Thanks! Can we document in `configs/full.yaml` that you can specify any argument to the scheduler?
Done 👍
The `ReduceLROnPlateau` option actually looks for the loss to increase, not plateau: as long as it doesn't increase, the learning rate never changes. In training my model I never see the loss increase; it just keeps decreasing by tinier and tinier amounts. Could we add a margin to the test, so that, for example, I could tell it to reduce the learning rate any time the loss decreases by less than 2%?
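For reference, PyTorch's `threshold` option (discussed above) appears to implement exactly this margin. A sketch of the corresponding YAML, assuming the `lr_scheduler_` prefix convention from this thread and illustrative values:

```yaml
# reduce the LR whenever the loss fails to improve by at least 2% (relative)
# for `patience` consecutive epochs
lr_scheduler_name: ReduceLROnPlateau
lr_scheduler_threshold: 0.02
lr_scheduler_threshold_mode: rel   # margin is relative to the best loss so far
lr_scheduler_patience: 25          # illustrative
```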