Closed karllandheer closed 5 months ago
I realized this would be hard to incorporate in the multi-class case, but since the labels get one-hot encoded anyway, it doesn't seem like a fundamental problem, just a question of how the labels are provided as input.
Hi @karllandheer,
sorry for the late reply. Generally speaking, I would advocate providing the multiple ground truths as separate label maps for the same image. Depending on which loss you are using (e.g. the soft Dice loss), the extension to non-binary labels is non-trivial.
If you create multiple instances of the same image because of multiple raters, it is advisable to build the splits files in the preprocessed folders accordingly, so that each image and its corresponding labels appear in only one split.
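To illustrate the grouping above, here is a small sketch that keeps all rater copies of the same image in the same fold. The case-naming convention (`_rater` suffix) and the list-of-dicts layout are assumptions on my part, chosen to mirror the `{"train": [...], "val": [...]}` entries of nnU-Net's `splits_final.json`; adapt both to your actual naming scheme.

```python
# Sketch (assumed naming convention, not an official nnU-Net utility):
# assign whole image groups to folds so that duplicate copies of one image
# (e.g. "case007_rater1", "case007_rater2") never straddle train and val.
from collections import defaultdict

def build_splits(case_ids, n_folds=5, rater_suffix="_rater"):
    """Group case IDs by base image name, then assign groups round-robin."""
    groups = defaultdict(list)
    for cid in case_ids:
        base = cid.split(rater_suffix)[0]  # strip the rater tag
        groups[base].append(cid)

    folds = [[] for _ in range(n_folds)]
    for i, base in enumerate(sorted(groups)):
        folds[i % n_folds].extend(groups[base])

    # One {"train": [...], "val": [...]} dict per fold, mirroring the
    # structure of nnU-Net's splits_final.json.
    splits = []
    for k in range(n_folds):
        val = sorted(folds[k])
        train = sorted(c for j, f in enumerate(folds) if j != k for c in f)
        splits.append({"train": train, "val": val})
    return splits
```

This could then be serialized to the preprocessed folder in place of the automatically generated splits, which would otherwise scatter the duplicates randomly.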
Best regards
Let's say you have more than one rater, which comes up fairly often. How do you incorporate their labels into nnUNet? One obvious way is simply to include all sets of training labels (in which case there will be duplicate input images). Another way would be to use soft labels, obtained by averaging over the multi-rater ground truths (i.e., for two raters the soft label would be 1 where both agree on a label, 0.5 where one marks the label and the other marks background, and 0 where both mark background). I don't know the "proper" way to do this, and maybe the two approaches result in nearly identical performance anyway? Does anyone have experience or suggestions with this?
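The averaging idea above can be sketched in a few lines. This is purely illustrative (not part of nnU-Net's API, whose training pipeline expects integer label maps by default); `soft_label` is a hypothetical helper name:

```python
# Illustrative sketch: average binary rater masks into a soft label map.
# For two raters this yields 1.0 where both mark foreground, 0.5 where
# they disagree, and 0.0 where both mark background, as described above.
import numpy as np

def soft_label(rater_masks):
    """rater_masks: list of binary arrays of identical shape, one per rater."""
    stacked = np.stack([m.astype(np.float32) for m in rater_masks])
    return stacked.mean(axis=0)

a = np.array([[1, 1, 0], [0, 0, 0]])  # rater 1
b = np.array([[1, 0, 0], [1, 0, 0]])  # rater 2
print(soft_label([a, b]))  # 1.0 where both agree, 0.5 where they differ
```

Note that using such soft targets also requires a loss that accepts non-integer labels (cross-entropy and soft Dice both generalize naturally, but the data loading would need to be adapted).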