Should the non-improving epochs have to occur in succession before the model stops training? Right now, `self.wait` is incremented whenever an epoch shows no improvement in the loss, regardless of whether those epochs are consecutive. This can be a problem: if the non-improving epochs are sporadic (i.e., do not occur in succession), we might stop training prematurely even though there is still performance to be gained on the loss.
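One common fix is to reset the counter whenever the loss improves, so that only *consecutive* non-improving epochs count toward patience. Here is a minimal sketch of that behavior; the class name, the `min_delta` threshold, and the `on_epoch_end` method are illustrative assumptions, not the actual code being discussed:

```python
class EarlyStopping:
    """Sketch of early stopping where self.wait resets on improvement,
    so sporadic stalls do not accumulate toward the patience limit."""

    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience      # consecutive non-improving epochs allowed
        self.min_delta = min_delta    # minimum decrease to count as improvement
        self.best = float("inf")      # best loss seen so far
        self.wait = 0                 # consecutive epochs without improvement
        self.stopped_epoch = None

    def on_epoch_end(self, epoch, loss):
        """Return True when training should stop."""
        if loss < self.best - self.min_delta:
            self.best = loss
            self.wait = 0             # improvement: reset the counter
            return False
        self.wait += 1                # no improvement this epoch
        if self.wait >= self.patience:
            self.stopped_epoch = epoch
            return True
        return False


# An isolated bad epoch (0.95) no longer counts toward stopping,
# because the improvement at 0.8 resets self.wait back to 0.
stopper = EarlyStopping(patience=3)
for epoch, loss in enumerate([1.0, 0.9, 0.95, 0.8, 0.85, 0.86, 0.87]):
    if stopper.on_epoch_end(epoch, loss):
        break
```

With the current increment-only behavior, the same loss sequence would hit `patience=3` at epoch 5 (non-improving epochs 2, 4, and 5), even though the loss was still improving at epoch 3; with the reset, training continues until three non-improving epochs occur in a row.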