Open chuanlukk opened 2 weeks ago
It is possible to continue training from an overfitted model.
Question: When the learning rate is reduced after early stopping triggers, why does the code resume training from the most recently saved model instead of reloading the best-performing checkpoint? Is this behavior intentional, or could it be a bug?

In our experience, we did not observe overfitting with this updating strategy. However, feel free to modify the code and test whether reloading the best model improves performance. Thanks!
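To make the two strategies in the question concrete, here is a minimal, framework-free sketch (plain Python, toy loss and update rule; all names are illustrative and not taken from the repository). It contrasts resuming from the latest weights versus reloading the best checkpoint when an early-stopping counter triggers a learning-rate reduction:

```python
def train(reload_best, epochs=20):
    """Toy training loop: returns the best validation loss seen.

    reload_best=True  -> on each LR reduction, restore the best checkpoint
    reload_best=False -> on each LR reduction, keep the latest weights
    """
    w = 0.0                      # stand-in for the model parameters
    lr = 1.0                     # deliberately too large, so loss diverges
    best_loss, best_w = float("inf"), w
    patience, bad = 3, 0         # early-stopping patience counter

    def val_loss(w):
        return (w - 5.0) ** 2    # toy validation loss, minimum at w = 5

    for _ in range(epochs):
        w = w + 2.5 * lr * (5.0 - w)   # toy gradient step (overshoots at lr=1.0)
        loss = val_loss(w)
        if loss < best_loss:
            best_loss, best_w, bad = loss, w, 0   # checkpoint the best model
        else:
            bad += 1
            if bad >= patience:        # patience exhausted: reduce the LR
                lr *= 0.1
                bad = 0
                if reload_best:
                    w = best_w         # restart from the best checkpoint

    return best_loss

# Reloading the best checkpoint recovers and converges after the LR drop;
# continuing from the latest (diverged) weights does not, in this toy setup.
print(train(reload_best=True), train(reload_best=False))
```

This is only a sketch of the trade-off being asked about, not the repository's actual logic: resuming from the latest weights preserves training continuity, while reloading the best checkpoint discards any post-peak drift at the cost of re-covering some ground.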