I haven't encountered a situation like this. Could you share the validation results?
The validation results show the Dice is 0. I think this is because the dice loss is not decreasing. Maybe it's a problem with my dataset? But I have tried nnU-Net on it before and got normal Dice scores.
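A minimal per-channel Dice check like the sketch below can help narrow this down (assuming PyTorch; `pred` and `gt` are placeholders for one binarized validation case, not names from the repo):

```python
import torch

def per_channel_dice(pred, target, eps=1e-5):
    """Dice per channel for binary (C, D, H, W) masks."""
    dims = tuple(range(1, pred.dim()))
    inter = (pred * target).sum(dims)
    denom = pred.sum(dims) + target.sum(dims)
    return (2 * inter + eps) / (denom + eps)

# pred, gt: hypothetical (C, D, H, W) {0,1} tensors for one validation case.
# If every channel comes out near 0 even for large organs, double-check that
# the channel ordering of the predictions matches that of the labels.
# print(per_channel_dice(pred.float(), gt.float()))
```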
Could you just try BCE alone? I don't think the dice loss should affect training convergence.
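Something like this minimal sketch (assuming a PyTorch training loop with logits and multi-channel binary labels; the names are illustrative):

```python
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

# Inside the training step, keep only the BCE term to isolate the problem.
# logits, labels: (B, C, D, H, W), labels in {0, 1} per organ/tumor channel.
def bce_only_loss(logits, labels):
    return bce(logits, labels.float())  # instead of bce(...) + dice(...)
```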
Thank you for your advice. I tried using only the BCE loss last night. The BCE loss even dropped to 0, but the inference results still showed a Dice of 0, and the other metrics are NaN. It's really weird. So I think the dice loss is crucial anyway.
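(For reference, the NaNs by themselves are not necessarily weird: a Dice computed without a smoothing term divides 0 by 0 whenever a class is absent from both the prediction and the ground truth. A tiny NumPy illustration with made-up all-zero masks:)

```python
import numpy as np

pred = np.zeros((8, 8, 8))  # class predicted nowhere
gt = np.zeros((8, 8, 8))    # class absent from the ground truth too
dice = 2 * (pred * gt).sum() / (pred.sum() + gt.sum())
print(dice)  # nan: 0/0, which is where NaN metrics usually come from
```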
Hi, thanks for your great work!
When I try to run the code with my own dataset, I encounter a problem: the dice loss doesn't decrease at all, while the bce_loss decreases normally, which is similar to the situation in https://github.com/ljwztc/CLIP-Driven-Universal-Model/issues/56.
But I should clarify: to run only my own dataset, which has different organs and tumors, I directly run train.py with some modifications to my dataloaders, and the label_transfer.py file is only used to generate the .h5 files of the ground truth.
Could you please give me some hints on why my dice loss doesn't decrease? Thanks a lot!
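In case it helps to rule out the ground truth itself, here is a minimal sketch for inspecting one of the generated .h5 files (the path and dataset key are assumptions; adjust them to whatever label_transfer.py actually writes in your setup):

```python
import h5py
import numpy as np

# 'post_label/case_0001.h5' is an illustrative path, not a real file name.
with h5py.File('post_label/case_0001.h5', 'r') as f:
    key = list(f.keys())[0]  # the dataset key may differ in your setup
    label = np.asarray(f[key])
    print('key:', key, 'shape:', label.shape, 'dtype:', label.dtype)
    # An all-zero channel means that organ/tumor never appears in this
    # case's ground truth, which alone would pin its Dice at 0.
    for c in range(label.shape[0]):
        print(f'channel {c}: {int(label[c].sum())} foreground voxels')
```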