thebes2 opened 2 years ago
Store the epoch number in the checkpoints. Restoring a checkpoint currently restarts training at epoch 0, which will not work with lr schedulers.
This would also let us replace the existing flag with a simple check for whether the current epoch is 0.
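A minimal sketch of what this could look like in PyTorch (the helper names, checkpoint keys, and `model`/`optimizer`/`scheduler` objects here are illustrative, not taken from this repo):

```python
import torch

def save_checkpoint(path, model, optimizer, scheduler, epoch):
    # Store the epoch alongside the states so a resumed run
    # does not restart at epoch 0.
    torch.save({
        "epoch": epoch,
        "model_state": model.state_dict(),
        "optimizer_state": optimizer.state_dict(),
        # The lr scheduler's state includes its step count,
        # so the schedule continues from where it left off.
        "scheduler_state": scheduler.state_dict(),
    }, path)

def load_checkpoint(path, model, optimizer, scheduler):
    ckpt = torch.load(path)
    model.load_state_dict(ckpt["model_state"])
    optimizer.load_state_dict(ckpt["optimizer_state"])
    scheduler.load_state_dict(ckpt["scheduler_state"])
    # Resume from the epoch after the one that was saved.
    return ckpt["epoch"] + 1
```

The training loop could then start at `start_epoch = load_checkpoint(...)` when a checkpoint exists and at 0 otherwise, which is what makes the "is the current epoch 0" check sufficient to replace the flag.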