[ ] - In the training repo, re-enable the logging of loss values during training (to plot loss vs. epoch), including test/validation loss. Additionally, make sure we log the test IoU per class. Plotting these curves helps check for signs of overfitting, in case the model needs to be modified/switched and retrained (the plots might also be needed if reviewers ask).
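  A minimal sketch of the per-class IoU metric we'd want to log, assuming integer label masks; the helper name and NumPy implementation are illustrative, not taken from the training repo:

  ```python
  import numpy as np

  def iou_per_class(pred, target, num_classes):
      """Intersection-over-union for each class.

      pred, target: integer label arrays of the same shape.
      Returns one IoU per class (NaN when the class is absent
      from both prediction and target).
      """
      ious = []
      for c in range(num_classes):
          pred_c = pred == c
          target_c = target == c
          intersection = np.logical_and(pred_c, target_c).sum()
          union = np.logical_or(pred_c, target_c).sum()
          ious.append(intersection / union if union > 0 else float("nan"))
      return ious

  pred = np.array([[0, 0, 1], [1, 1, 0]])
  target = np.array([[0, 1, 1], [1, 1, 0]])
  print(iou_per_class(pred, target, num_classes=2))  # [0.666..., 0.75]
  ```

  Logging this once per epoch on the test/validation split would give the per-class curves alongside the loss plots.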
[ ] - Introduce a pipeline parameter for the sample number T of the Monte Carlo Dropout procedure; at the moment it is hardcoded to 10 (as in the reference).
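  For reference, the procedure the T parameter would control looks roughly like this: run T stochastic forward passes with dropout left active at inference time, then average. The function and parameter names below are hypothetical, and the toy "network" just stands in for a model with dropout enabled:

  ```python
  import numpy as np

  def mc_dropout_predict(forward, x, t=10, seed=0):
      """Monte Carlo Dropout: run t stochastic forward passes
      (dropout active at inference) and return the mean
      prediction plus the per-output standard deviation as an
      uncertainty estimate.
      """
      rng = np.random.default_rng(seed)
      samples = np.stack([forward(x, rng) for _ in range(t)])
      return samples.mean(axis=0), samples.std(axis=0)

  # Toy stand-in for a network: linear map with dropout (p=0.5)
  # applied to the input, rescaled as in inverted dropout.
  def toy_forward(x, rng, p=0.5):
      mask = rng.random(x.shape) >= p
      return (x * mask / (1 - p)) @ np.ones(x.shape[-1])

  mean, std = mc_dropout_predict(toy_forward, np.ones(4), t=100)
  ```

  Exposing `t` as a pipeline parameter (instead of the hardcoded 10) lets users trade runtime for a smoother uncertainty estimate.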
[ ] - We agreed to remove all direct nf-core references in the code; that is done. However, we could still change the module/process names, switching "nf-core" to "nf-root". We could also check how the ASCII art shown when the pipeline runs is generated, and maybe replace it with a new nf-root logo :)