Our learning rate scheduler currently uses a linear ramp-up and an exponential dropoff, so our learning rate curve looks like the following:
where the durations of the initial ramp-up and the decay are tunable hyperparameters.
However, others have pointed out that a square (quadratic) ramp-up and square decay can perform significantly better, so we may want to support them as well. The modified curve (orange) would look like the following:
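The two schedule shapes can be sketched as follows. This is a minimal illustration, not our actual scheduler code: the function names, the `peak_lr`/`warmup_steps`/`decay_rate`/`decay_steps` parameters, and the choice to decay the square schedule to zero over a fixed window are all assumptions for the sake of the example.

```python
import math

def linear_warmup_exp_decay(step, peak_lr, warmup_steps, decay_rate):
    """Current shape: linear ramp-up to peak_lr, then exponential dropoff."""
    if step < warmup_steps:
        # Linear ramp: lr grows proportionally to step until warmup ends.
        return peak_lr * step / warmup_steps
    # Exponential dropoff after the peak.
    return peak_lr * math.exp(-decay_rate * (step - warmup_steps))

def square_warmup_square_decay(step, peak_lr, warmup_steps, decay_steps):
    """Proposed shape: quadratic ramp-up, then quadratic decay to zero."""
    if step < warmup_steps:
        # Quadratic ramp: slower start, faster approach to the peak.
        return peak_lr * (step / warmup_steps) ** 2
    # Quadratic decay over decay_steps, clamped at zero afterwards.
    t = min((step - warmup_steps) / decay_steps, 1.0)
    return peak_lr * (1.0 - t) ** 2
```

Both functions hit `peak_lr` exactly at `step == warmup_steps`, which makes it straightforward to plot and compare the two curves over the same step range.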