Closed rakshitraj closed 3 years ago
You can create a conditional early-stopping
function by setting a threshold and monitoring the delta in the training loss. When the change in training loss falls below the threshold, you can stop the training loop. Here's an implementation example from StackOverflow. Alternatively, you can use a third-party library that supports early stopping
(see example here).
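The threshold approach above could be sketched roughly like this — a minimal, framework-agnostic example; the names `should_stop` and `min_delta` are illustrative, not from the linked answer:

```python
# Hedged sketch of delta-threshold early stopping on the training loss.
# `min_delta` (illustrative name) is the smallest improvement that still
# counts as progress.

def should_stop(loss_history, min_delta=1e-4):
    """Return True when the most recent improvement in training loss
    falls below `min_delta`."""
    if len(loss_history) < 2:
        return False
    delta = loss_history[-2] - loss_history[-1]
    return delta < min_delta

# Usage inside a training loop:
losses = []
for epoch_loss in [0.9, 0.5, 0.3, 0.29995]:
    losses.append(epoch_loss)
    if should_stop(losses):
        break  # stops once the loss plateaus
```

The same check works on validation loss instead of training loss, which is usually the better signal for when to stop.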
Please note that the validation loss shows a general trend of increasing. It may decrease from one epoch to the next, but the overall validation loss across several epochs is, in fact, increasing.
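Because the validation loss can dip between individual epochs while trending upward overall, a patience-based check is a common way to handle the noise: track the best validation loss seen so far and stop only after several epochs without a new best. A minimal sketch (the class and parameter names here are illustrative, not from the notebook):

```python
# Hedged sketch of patience-based early stopping on validation loss.
# `patience` (illustrative name) is how many epochs without improvement
# to tolerate before stopping.

class EarlyStopping:
    def __init__(self, patience=5):
        self.patience = patience
        self.best = float("inf")   # best validation loss seen so far
        self.bad_epochs = 0        # epochs since the last improvement

    def step(self, val_loss):
        """Call once per epoch; returns True when training should stop."""
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```

With this scheme, the ideal number of epochs is effectively the epoch that produced the lowest validation loss; the patience window just guards against stopping on a noisy dip.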
In the notebook MNIST-MLP-with-validation, the validation loss does not show a clear decreasing trend through the epochs. What, then, should be the ideal number of epochs to train for?
Refer to the loss vs. epochs plot below:
Refer to the training output below: