BRIO-lab / LitJTML

Using PyTorch Lightning and WandbLogger for our JTML neural network segmentation code

Stop Training When Validation Loss Increases #9

Open sasank-desaraju opened 2 years ago

sasank-desaraju commented 2 years ago

Implement stopping training when the validation loss increases. I think we can do this through Callbacks. Maybe create a new callback class in the callbacks.py file so we can toggle whether to use this feature, or just put it in the sole existing class because "we will always use this". A sketch of one way to wire this up is below.
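
A minimal sketch using Lightning's built-in `EarlyStopping` callback, assuming the LightningModule logs its validation loss under the key `"val_loss"` (e.g. `self.log("val_loss", loss)` in `validation_step`); the metric key, patience value, and the `model`/`datamodule` names are placeholders, not something fixed in this repo yet:

```python
import pytorch_lightning as pl
from pytorch_lightning.callbacks import EarlyStopping

# Stop training once val_loss has not improved for `patience` validation epochs.
early_stop = EarlyStopping(
    monitor="val_loss",   # assumed metric key; must match what the module logs
    mode="min",           # lower validation loss is better
    patience=3,           # tolerate a few noisy epochs before stopping
    min_delta=0.0,        # any non-improvement counts toward patience
    verbose=True,
)

trainer = pl.Trainer(
    max_epochs=100,
    callbacks=[early_stop],  # toggle the feature by including/omitting this callback
)
# trainer.fit(model, datamodule=datamodule)  # placeholder names
```

Passing the callback into the `Trainer`'s `callbacks` list already gives us the on/off toggle, so a separate wrapper class in callbacks.py may only be worth it if we want to bundle it with our other custom callback logic.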