Closed Andry-Bal closed 3 years ago
Thank you for reporting this, @Andry-Bal. I believe simply adding a conditional statement after line 34 will fix this.
```python
self.mnt_best = inf if self.mnt_mode == 'min' else -inf
self.early_stop = cfg_trainer.get('early_stop', inf)
if self.early_stop <= 0:
    self.early_stop = inf
```
I'll make a PR after doing some tests (or I'd appreciate it if you make one).
Sorry for the late response on this and the other issue (#79) you reported. That's because the version of the template I'm currently using for most of my projects is on the hydra_DDP branch instead of the master branch.
No problem! Thank you for working on this project, it's really helpful.
The README says about early stopping that 'This feature can be turned off by passing 0'. This, however, will not work.
`self.early_stop` will be set to 0 in: https://github.com/victoresque/pytorch-template/blob/85c55356547245b0718291019dfed89385e75e0f/base/base_trainer.py#L34 and later, during training, this condition will be true whenever the current epoch was worse than the previous one: https://github.com/victoresque/pytorch-template/blob/85c55356547245b0718291019dfed89385e75e0f/base/base_trainer.py#L91 leading to the output: `Validation performance didn't improve for 0 epochs. Training stops.`
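To make the failure mode concrete, here is a minimal sketch of the comparison involved. The helper name `should_stop` is hypothetical (the template does this check inline around base_trainer.py#L91); the point is only that with `early_stop = 0` the threshold is exceeded after the very first non-improving epoch, while remapping 0 to `inf` makes the check unreachable:

```python
from math import inf

def should_stop(not_improved_count, early_stop):
    # Hypothetical simplification of the early-stopping check:
    # stop once the number of consecutive epochs without
    # improvement exceeds the configured threshold.
    return not_improved_count > early_stop

# With early_stop = 0 (the documented "turn off" value), the check
# fires after the first non-improving epoch instead of never:
print(should_stop(1, 0))            # True -> training stops immediately

# Mapping 0 (or any non-positive value) to inf restores the
# intended "disabled" behaviour, since count > inf is never true:
early_stop = 0
if early_stop <= 0:
    early_stop = inf
print(should_stop(1, early_stop))   # False -> training continues
```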