MIC-DKFZ / nnUNet


Loss functions in nnunet #647

Closed: Nandayang closed this issue 1 year ago

Nandayang commented 3 years ago

Thanks for sharing the fantastic code! I have some questions about loss functions in nnUNet:

  1. What is GDL? Does it mean the generalized Dice loss? It would be great if you could provide more details about GDL (a paper link or something similar).
  2. I'm not sure I am adding a customized loss function correctly. My approach was to create a new loss function class in the 'loss_functions' folder and use it in a new 'nnUNetTrainer'. Did I miss something?

Best wishes

FabianIsensee commented 3 years ago

GDL is the generalized Dice loss, but I am not sure whether it is implemented correctly. Have a look at this repository by @JunMa11 for inspiration on loss functions: https://github.com/JunMa11/SegLoss
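
For reference, the GDL from Sudre et al. 2017 ("Generalised Dice overlap as a deep learning loss function") weights each class's Dice term by the inverse of its squared volume, so small structures are not drowned out by large ones. A minimal PyTorch sketch (class name and smoothing constant are illustrative, not nnU-Net's own code):

```python
import torch
import torch.nn as nn

class GeneralizedDiceLoss(nn.Module):
    """Generalized Dice loss: per-class overlap terms weighted by the
    inverse squared class volume (Sudre et al. 2017). Illustrative sketch."""
    def __init__(self, smooth=1e-5):
        super().__init__()
        self.smooth = smooth

    def forward(self, logits, target_onehot):
        # logits and target_onehot have shape (batch, classes, *spatial)
        probs = torch.softmax(logits, dim=1)
        axes = tuple(range(2, logits.ndim))            # sum over spatial dims
        intersect = (probs * target_onehot).sum(axes)  # (batch, classes)
        denom = probs.sum(axes) + target_onehot.sum(axes)
        # inverse squared class volume; clamped to avoid division by zero
        w = 1.0 / (target_onehot.sum(axes) ** 2).clamp(min=self.smooth)
        gdl = 1.0 - 2.0 * (w * intersect).sum(1) / (w * denom).sum(1).clamp(min=self.smooth)
        return gdl.mean()
```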

I recommend starting with nnUNetTrainerV2, NOT nnUNetTrainer!

JunMa11 commented 3 years ago

Hi @FabianIsensee, thanks for referring to the SegLoss repo :)

@Nandayang Here are some loss examples built on nnUNetTrainerV2 (https://github.com/JunMa11/SegLoss/tree/master/test/nnUNetV2); I would recommend using compound loss functions.
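
The examples in that folder follow roughly the pattern below: subclass nnUNetTrainerV2 and swap out `self.loss`. A hedged sketch (the import path follows nnU-Net v1's layout; `MyCompoundLoss` is a placeholder for your own loss class):

```python
# Sketch of the custom-trainer pattern; MyCompoundLoss is hypothetical.
from nnunet.training.network_training.nnUNetTrainerV2 import nnUNetTrainerV2
from my_losses import MyCompoundLoss  # hypothetical module with your loss

class nnUNetTrainerV2_MyLoss(nnUNetTrainerV2):
    """Identical to nnUNetTrainerV2 except for the loss function."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.loss = MyCompoundLoss()  # replaces the default Dice+CE loss
```

Training then selects the new class by name, e.g. something like `nnUNet_train 3d_fullres nnUNetTrainerV2_MyLoss TaskXXX_MYTASK 0` in nnU-Net v1, provided the file is placed where the trainer discovery can find it.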

FabianIsensee commented 3 years ago

By compound, do you mean Dice + CE (the nnU-Net default)? I have not been able to confirm the performance of Dice + TopK10: it works better on some datasets, but worse on others.
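
For context, the TopK10 term averages the cross-entropy over only the hardest 10% of voxels; a minimal sketch (illustrative, not nnU-Net's exact implementation):

```python
import torch.nn as nn

class TopKCELoss(nn.Module):
    """Cross-entropy averaged over only the k% highest-loss voxels;
    k=10 gives the 'TopK10' variant. Illustrative sketch."""
    def __init__(self, k=10):
        super().__init__()
        self.k = k
        self.ce = nn.CrossEntropyLoss(reduction='none')

    def forward(self, logits, target):
        # per-voxel CE, then keep only the hardest k percent
        voxel_losses = self.ce(logits, target).flatten()
        num_kept = max(1, int(voxel_losses.numel() * self.k / 100))
        return voxel_losses.topk(num_kept, sorted=False).values.mean()
```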

JunMa11 commented 3 years ago

The "compound losses" mean combining different loss functions with submission, e.g., Dice+CE, Dice+TopK10, Dice+Focal. They are more robust than using single loss. However, as you mentioned, there is not a clear winner among the compound losses. Given a new segmentation task, I still do not find a principle to identify the best compound loss except doing experiments.