An update on this front.
I figured out that the learning rate was resetting between training sessions, so this fixes that. I also changed the patience back to its default and now feed the scheduler absolute loss values: the scheduler only lowers the LR when the monitored loss stops decreasing, but what we actually want is convergence towards zero (the generator's loss is often negative, so we want it to increase towards zero, not decrease).
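To make the two fixes concrete, here is a minimal self-contained sketch of the idea (class and method names are hypothetical; the real training code presumably uses a framework plateau scheduler): the scheduler monitors `abs(loss)` so a negative generator loss converging towards zero counts as improvement, and its state can be saved and restored so the LR does not reset between sessions.

```python
class PlateauScheduler:
    """Reduce lr by `factor` when the monitored value has not improved
    for more than `patience` consecutive steps. Hypothetical sketch,
    not the actual training code."""

    def __init__(self, lr=1e-3, factor=0.1, patience=10):
        self.lr = lr
        self.factor = factor
        self.patience = patience
        self.best = float("inf")
        self.bad_steps = 0

    def step(self, loss):
        # Monitor the absolute loss: the generator loss is often
        # negative, so |loss| tracks convergence towards zero.
        metric = abs(loss)
        if metric < self.best:
            self.best = metric
            self.bad_steps = 0
        else:
            self.bad_steps += 1
            if self.bad_steps > self.patience:
                self.lr *= self.factor
                self.bad_steps = 0
        return self.lr

    # Persist/restore scheduler state so the LR does not reset
    # when training resumes in a new session.
    def state_dict(self):
        return {"lr": self.lr, "best": self.best, "bad_steps": self.bad_steps}

    def load_state_dict(self, state):
        self.lr = state["lr"]
        self.best = state["best"]
        self.bad_steps = state["bad_steps"]
```

Resuming then looks like `sched.load_state_dict(saved_state)` at session start instead of constructing a fresh scheduler at the initial LR.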