Closed: silvandeleemput closed this 2 years ago
LGTM!
Regarding documentation, I think the easiest option would be to either add a new Markdown file at the root of the repository, or to document the changes relative to the original repository at the top of the main README file.
Thijs Gelton also already added a custom trainer, which is currently not documented (see d5f32e7).
Good suggestions. What do you think about also including some documentation in the diag-nnunet repository, since it will be an entry point for many users?
Yes good idea, I'll add some general information there about the customized nnUNet code, and more specific things can then be added afterwards.
I have added a new file for documenting the DIAG modifications and added a reference to it in the readme.md
Nice work! After merging, we need to add a new tag (1.7.0-5) and update the diag-nnunet repository.
Related to #7
This PR adds two new trainer classes to `nnunet/training/network_training/diag`.

Both classes implement a similar, simple training scheme that works with partially annotated segmentation data. Unlabeled voxels should be marked with a value of `-1` inside the labels/ ground-truth segmentation maps; these are otherwise just the default nnUNet ground-truth data format files.

The difference between the classes is that the first will only try to sample around annotated data, by randomly selecting an annotated voxel that would result in a valid patch (handling distance to the border, minimal patch size, etc.), while the second trainer uses the default sampling without regard for where the annotated voxels are situated.
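To make the sampling idea concrete, here is a minimal NumPy sketch of selecting a patch center around an annotated voxel, clipped so the full patch stays inside the volume. This is an illustration only, not the actual trainer code from this PR; the function name and signature are hypothetical.

```python
import numpy as np

def sample_patch_center(seg, patch_size, rng=None):
    """Pick a patch center near a randomly chosen annotated voxel.

    Annotated voxels are those with label != -1. The chosen center is
    clipped so that the patch [c - p//2, c - p//2 + p) fits inside the
    volume, mirroring the 'valid patch' handling described above.
    Hypothetical helper, not the PR's implementation.
    """
    rng = rng or np.random.default_rng()
    annotated = np.argwhere(seg != -1)
    if len(annotated) == 0:
        # Fall back to the volume center when nothing is annotated.
        return tuple(s // 2 for s in seg.shape)
    voxel = annotated[rng.integers(len(annotated))]
    center = []
    for c, dim, p in zip(voxel, seg.shape, patch_size):
        half = p // 2
        # Clip so the patch stays within [0, dim) along this axis.
        center.append(int(np.clip(c, half, dim - (p - half))))
    return tuple(center)
```

For an annotated voxel close to the border, the center gets pushed inward so the patch is still valid; a fully unlabeled volume falls back to the center crop.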
Some caveats with the current implementation of sampling around annotated data:

- The `random_crop` argument doesn't do anything for this sampling, since the crop is assumed to be random within the annotated data anyway.

@nlessmann I know that you are about to leave, but could you have a look at this PR? I am also looking for suggestions for a good place to put documentation on how to use these classes.
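For readers of the future documentation: the key point of the `-1` convention is that unlabeled voxels must not contribute to the loss. A minimal NumPy sketch of such masking, with a hypothetical helper name (this is not the trainer code in this PR):

```python
import numpy as np

def masked_voxel_loss(per_voxel_loss, target):
    """Average a per-voxel loss over annotated voxels only.

    Voxels labeled -1 (unlabeled, per the convention in this PR) are
    excluded, so the training signal comes solely from annotated
    regions. Hypothetical illustration, not the actual implementation.
    """
    mask = target != -1
    n_annotated = mask.sum()
    if n_annotated == 0:
        # No annotated voxels in this patch: contribute nothing.
        return 0.0
    return float((per_voxel_loss * mask).sum() / n_annotated)
```

With the first trainer's sampling strategy the `n_annotated == 0` case should be rare, since patches are drawn around annotated voxels; with the default sampling of the second trainer it can occur regularly.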