Closed Nandayang closed 1 year ago
GDL is the generalized Dice loss, but I am not sure whether it is implemented correctly. Have a look at this repository by @JunMa11 for inspiration about loss functions: https://github.com/JunMa11/SegLoss
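For reference, the idea behind the generalized Dice loss can be sketched as follows. This is a minimal NumPy illustration under assumed conventions (softmax probabilities and one-hot targets of shape `(N, C, *spatial)`, a small `eps` for stability), not nnU-Net's or SegLoss's actual implementation:

```python
import numpy as np

def generalized_dice_loss(probs, onehot, eps=1e-6):
    """Generalized Dice loss sketch for softmax outputs.

    probs, onehot: arrays of shape (N, C, *spatial). Class weights are the
    inverse squared reference volumes, so rare classes contribute more.
    """
    axes = (0,) + tuple(range(2, probs.ndim))  # sum over batch + spatial dims
    ref_vol = onehot.sum(axis=axes)            # per-class target volume
    w = 1.0 / (ref_vol ** 2 + eps)             # inverse-volume-squared weights
    intersect = (probs * onehot).sum(axis=axes)
    denom = (probs + onehot).sum(axis=axes)
    return 1.0 - 2.0 * (w * intersect).sum() / ((w * denom).sum() + eps)
```

A perfect prediction drives the weighted overlap ratio to 1 and the loss toward 0; the inverse-volume weighting is exactly the part that is easy to get wrong (e.g., handling of empty classes), which may be what the correctness doubt above is about.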
I recommend starting with nnUNetTrainerV2, NOT nnUNetTrainer!
Hi @FabianIsensee, thanks for referring to the SegLoss repo :)
@Nandayang Here are some loss examples implemented with nnUNetTrainerV2: https://github.com/JunMa11/SegLoss/tree/master/test/nnUNetV2. I would recommend using compound loss functions.
By "compound" do you mean Dice + CE (the nnU-Net default)? I have not been able to confirm the performance of Dice + TopK10: it works better on some datasets, but worse on others.
"Compound losses" combine different loss functions by summation, e.g., Dice+CE, Dice+TopK10, Dice+Focal. They are more robust than any single loss. However, as you mentioned, there is no clear winner among the compound losses. Given a new segmentation task, I still have not found a principle for identifying the best compound loss other than running experiments.
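To make the summation concrete, the compound losses named above can be sketched as plain sums of their components. This is a hedged NumPy illustration with assumed shapes and function names, not the SegLoss or nnU-Net code:

```python
import numpy as np

def soft_dice_loss(probs, onehot, eps=1e-6):
    # probs, onehot: (N, C, *spatial); soft Dice averaged over classes
    axes = (0,) + tuple(range(2, probs.ndim))
    intersect = (probs * onehot).sum(axis=axes)
    denom = (probs + onehot).sum(axis=axes)
    return 1.0 - ((2.0 * intersect + eps) / (denom + eps)).mean()

def ce_loss(probs, onehot, eps=1e-12):
    # pixel-wise cross entropy on softmax probabilities
    return -(onehot * np.log(probs + eps)).sum(axis=1).mean()

def topk_ce_loss(probs, onehot, k=10, eps=1e-12):
    # cross entropy averaged over only the hardest k percent of pixels
    per_pixel = -(onehot * np.log(probs + eps)).sum(axis=1).ravel()
    n_keep = max(1, int(round(per_pixel.size * k / 100.0)))
    return np.sort(per_pixel)[-n_keep:].mean()

def dice_ce_loss(probs, onehot):
    # unweighted sum: the Dice + CE compound
    return soft_dice_loss(probs, onehot) + ce_loss(probs, onehot)

def dice_topk_loss(probs, onehot, k=10):
    # the Dice + TopK10 compound (k=10 keeps the hardest 10% of pixels)
    return soft_dice_loss(probs, onehot) + topk_ce_loss(probs, onehot, k)
```

The unweighted sum is the simplest choice; whether a weighting between the two terms helps is one of the things that, per the discussion above, only experiments on the task at hand can answer.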
Thanks for sharing the fantastic code! I have some questions about loss functions in nnUNet:
Best wishes