ljwztc / CLIP-Driven-Universal-Model

[ICCV 2023] CLIP-Driven Universal Model; Rank first in MSD Competition.

The problem of Dice loss #65

Closed: inhaowu closed this issue 1 month ago

inhaowu commented 5 months ago

Hi, thanks for your great work!

When I try to run the code with my own dataset, I encounter a problem: the dice loss does not decrease at all, while the BCE loss decreases normally. This is similar to the situation in https://github.com/ljwztc/CLIP-Driven-Universal-Model/issues/56.

To clarify: to run my own dataset, which has different organs and tumors, I run train.py directly with some modifications to my dataloaders; the label_transfer.py file is only used to generate the .h5 ground-truth files.

Could you please give me some hints on why my dice loss doesn't decrease? Thanks a lot!

ljwztc commented 5 months ago

I haven't encountered this situation. Could you check the validation result?

inhaowu commented 5 months ago

> I haven't encountered this situation. Could you check the validation result?

The validation results show a dice of 0, which I think is because the dice loss is not decreasing. Maybe it's a problem with my dataset? But I have tried nnU-Net before and got normal dice results.

ljwztc commented 5 months ago

Could you try training with only BCE? I don't think the dice loss affects training convergence.

inhaowu commented 5 months ago

> Could you try training with only BCE? I don't think the dice loss affects training convergence.

Thank you for the advice. I tried using only the BCE loss last night. The BCE loss even dropped to 0, but the inference results still show a dice of 0, and the other metrics are nan. It's really weird. I think the dice loss is crucial after all.

ljwztc commented 1 month ago

The bug in the dice loss calculation has been fixed at this link. With the fix, the dice loss now decreases as expected.