constantinpape / torch-em

Deep-learning based semantic and instance segmentation for 3D Electron Microscopy and other bioimage analysis problems based on pytorch.
MIT License

Add support to avoid overwriting trained models #411

Closed anwai98 closed 3 days ago

anwai98 commented 4 days ago

This PR adds optional support to the DefaultTrainer for skipping training if it has already been completed. This is controlled via the fit method's overwrite_training argument, which defaults to True (i.e. the trainers' current behavior). If it is set to False, the trainer inspects the latest.pt checkpoint to verify whether training has already finished. GTG from my side!
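A minimal sketch of the completion check described above, assuming the checkpoint stores the current and target iteration counts (the function and key names here are hypothetical; torch-em's actual checkpoint layout may differ, and real checkpoints are saved with torch.save rather than plain pickle):

```python
import os
import pickle
import tempfile


def training_is_done(checkpoint_path):
    """Return True if the checkpoint reports that training reached its target.

    Assumes the checkpoint is a dict with 'iteration' and 'max_iteration'
    keys (hypothetical names used for illustration only).
    """
    if not os.path.exists(checkpoint_path):
        return False
    with open(checkpoint_path, "rb") as f:
        ckpt = pickle.load(f)
    return ckpt.get("iteration", 0) >= ckpt.get("max_iteration", float("inf"))


# Simulate a finished run's latest.pt-style checkpoint.
path = os.path.join(tempfile.mkdtemp(), "latest.pt")
with open(path, "wb") as f:
    pickle.dump({"iteration": 10000, "max_iteration": 10000}, f)

print(training_is_done(path))  # True for this simulated checkpoint
```

With overwrite_training=False, fit would return early when such a check succeeds instead of restarting training.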

constantinpape commented 4 days ago

I will think about how this interacts with continuing training from an existing checkpoint.

anwai98 commented 3 days ago

Hi @constantinpape,

I took care of the interaction between continuing training and overwriting trained models (i.e. an error is now raised when overwrite_training is set to False and the user passes a custom checkpoint to continue training from).
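The argument check described above could look roughly like this (a sketch only; the signature and message are hypothetical, not torch-em's actual API):

```python
def fit(max_iterations, overwrite_training=True, load_from_checkpoint=None):
    """Hypothetical sketch of rejecting conflicting arguments:
    continuing from a custom checkpoint contradicts a request
    not to overwrite an existing trained model."""
    if not overwrite_training and load_from_checkpoint is not None:
        raise ValueError(
            "Cannot continue training from a custom checkpoint when "
            "overwrite_training is set to False."
        )
    # ... normal training would proceed here ...
    return "training started"


try:
    fit(100, overwrite_training=False, load_from_checkpoint="best.pt")
except ValueError as err:
    print(err)
```

Raising early keeps the two features from silently interacting: the user must explicitly opt back into overwriting before resuming from a checkpoint.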

GTG from my side!