Closed JayOrten closed 3 months ago
This PR adds a parameter, every_n_train_steps, that tells Lightning to save a checkpoint every n training steps. The benefit is that we no longer have to checkpoint only once per epoch, which can be a very long interval for a large dataset.
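As a minimal sketch of the step-based trigger this parameter enables (the function name and structure here are illustrative, not the PR's actual implementation):

```python
def should_checkpoint(global_step: int, every_n_train_steps: int) -> bool:
    """Return True when a checkpoint should be saved at this training step.

    Illustrative sketch only. A non-positive value disables
    step-based saving, mirroring the common convention of using
    0 to turn the feature off.
    """
    if every_n_train_steps <= 0:
        return False
    return global_step % every_n_train_steps == 0


# With every_n_train_steps=100, checkpoints fire at steps 100, 200, 300, ...
saves = [s for s in range(1, 301) if should_checkpoint(s, 100)]
print(saves)  # [100, 200, 300]
```

In Lightning's public API, this kind of option is configured on the ModelCheckpoint callback rather than called directly; the sketch above only shows the modulo condition that decides when a step-based save fires.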