The previous logic for restarting training from an existing model was:
if epochNumber is not 1, use opt.LR. However, a user following the
55-epoch recipe without loading optimState (at the cost of losing the
optimizer's internal state) would then be stuck with LR=0.0, the unset
default, for the entire training run.

With this commit a newRegime is generated whenever opt.LR is NOT set
manually, including when epochNumber is different from one. Checking
the value of opt.LR early on preserves backward compatibility.
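
A minimal sketch of the intended behaviour. The paramsForEpoch helper,
the regime values, and the opt.epochNumber field are assumptions based
on the standard Torch ImageNet training script, not necessarily the
exact diff:

    -- opt is the usual command-line options table; opt.LR defaults to
    -- 0.0 when not supplied (assumed, per the commit message).
    local function paramsForEpoch(epoch)
       -- Early check: a manually supplied LR wins, as before, so users
       -- who pass -LR on restart see no change in behaviour.
       if opt.LR ~= 0.0 then
          return { }
       end
       -- 55-epoch recipe; rows are { firstEpoch, lastEpoch, LR, WD }.
       -- Values shown here are illustrative.
       local regimes = {
          {  1,  18, 1e-2, 5e-4 },
          { 19,  29, 5e-3, 5e-4 },
          { 30,  43, 1e-3, 0    },
          { 44,  52, 5e-4, 0    },
          { 53, 1e8, 1e-4, 0    },
       }
       -- The regime is looked up from the epoch alone, so restarting
       -- with epochNumber ~= 1 still lands in the right row instead of
       -- falling back to the unset opt.LR (0.0).
       for _, row in ipairs(regimes) do
          if epoch >= row[1] and epoch <= row[2] then
             -- newRegime is raised at a regime boundary and on the
             -- restart epoch, so the caller rebuilds optimState.
             return { learningRate = row[3], weightDecay = row[4] },
                    (epoch == row[1]) or (epoch == opt.epochNumber)
          end
       end
    end

For example, restarting at epoch 31 with LR left at 0.0:

    local params, newRegime = paramsForEpoch(opt.epochNumber)
    -- params = { learningRate = 1e-3, weightDecay = 0 }, newRegime = true,
    -- so the caller rebuilds optimState instead of keeping LR = 0.0.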