xmed-lab / DHC

MICCAI 2023: DHC: Dual-debiased Heterogeneous Co-training Framework for Class-imbalanced Semi-supervised Medical Image Segmentation

about DC loss #1

Closed BJQ123456 closed 1 year ago

BJQ123456 commented 1 year ago

[image]

Could you please tell me why dc here is multiplied by negative one during training, which results in a negative loss function? Thank you for your answer.

McGregorWwww commented 1 year ago

Generally, the Dice score is a value in [0, 1] that measures segmentation accuracy, so it is positive. However, for training, the loss should be minimized, so we negate the Dice score to obtain the Dice loss. By minimizing this loss, the Dice score is maximized.
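In code, the idea looks like the following minimal sketch (illustrative PyTorch; the function and argument names are placeholders, not the repo's exact implementation):

```python
import torch

def soft_dice_loss(pred, target, eps=1e-5):
    # pred: predicted probabilities (e.g. after softmax/sigmoid),
    # target: binary ground-truth mask of the same shape.
    inter = (pred * target).sum()
    dice = (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
    return -dice  # equivalently 1 - dice; the constant does not change gradients
```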

BJQ123456 commented 1 year ago

Thank you very much for your reply, but as far as I know, the Dice loss is 1 - dc; should this be changed here? Moreover, after I changed it to 1 - dc, the loss still becomes negative because of the increased weights. May I ask why this is? Looking forward to your reply. One more question: do you rescale the number of layers directly when converting nii to npy? Will this make a difference?

McGregorWwww commented 1 year ago

Hello,

  1. For the Dice loss, there is no major difference between 1 - dc and -dc, since 1 is a constant and does not affect the gradient;
  2. In this work we use a weighting strategy in the loss function, so the weighted sum of the per-class Dice scores can exceed 1, which makes 1 - dc negative (see the sketch after this list);
  3. If you mean the spacing of the z-axis, yes, but the number of layers stays the same; details of the preprocessing can be found in code/data/preprocess.py. In my opinion, considering the spacing will not make a difference, but you can try it.
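To see why the weighting makes 1 - dc negative, here is a toy illustration with hypothetical per-class Dice scores and class weights (the numbers are made up, not from the paper):

```python
import torch

# Hypothetical per-class Dice scores and imbalance-aware class weights:
per_class_dice = torch.tensor([0.9, 0.8, 0.7])
weights = torch.tensor([1.0, 1.5, 2.0])

weighted_dice = (weights * per_class_dice).sum()  # 0.9 + 1.2 + 1.4 = 3.5
print(1 - weighted_dice)  # tensor(-2.5000): the "1 - dc" loss is negative
```

Each per-class Dice is still in [0, 1], but once the weights sum to more than 1, the weighted total can exceed 1, so subtracting it from 1 goes negative; this is harmless for optimization, since only the gradient matters.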