This PR heavily improves the way we train the contrast-agnostic model. Previously, many important arguments were passed through the command line, making the argument list ever-growing and messy. With this PR, all the important params (and hyperparams) are specified in a `.yaml` file that can be used for training. This could help other segmentation projects looking to use MONAI-based models to improve upon or replace the nnUNet model.
This improved training strategy is mainly motivated by #87, a follow-up project investigating model drift (via lifelong learning) when the contrast-agnostic model is trained on additional datasets. It also simplifies training a foundational model for SC segmentation, which we were discussing with @valosekj.
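Usage is now as simple as pointing the training script at a single config file. As a rough sketch of what such a file might look like (the key names and values below are illustrative only, not the exact schema introduced in this PR):

```yaml
# Hypothetical config.yaml -- field names are illustrative, not the actual schema.
# Invoked with something like: python train.py --config config.yaml
model:
  name: unet            # MONAI-based architecture to train
  spatial_dims: 3
training:
  batch_size: 2
  max_epochs: 200
  learning_rate: 0.001
  loss: dice_ce
data:
  datasets:             # list grows as new datasets are added (cf. #87)
    - spine-generic
  contrast: all         # contrast-agnostic: train across all contrasts
```

Keeping these params in one versioned file makes each training run reproducible and makes it easy to diff configs between experiments.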