bowang-lab / U-Mamba

U-Mamba: Enhancing Long-range Dependency for Biomedical Image Segmentation
https://arxiv.org/abs/2401.04722
Apache License 2.0

problem in testing #18

Closed · gumayusi3 closed this issue 6 months ago

gumayusi3 commented 7 months ago

I can only get a high Dice when training with "fold all", but it doesn't work when testing. What should I do to find the best configuration? Simply changing "(0, 1, 2, 3, 4)" to "all" in "find_best_configuration.py" and running it with Python doesn't work.
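For context, here is a minimal sketch (not the nnU-Net/U-Mamba source) of what the folds (0, 1, 2, 3, 4) refer to: a 5-fold cross-validation split of the training cases. The "all" fold instead trains on every case, so there is no held-out split left for a best-configuration search to score. The case IDs below are made up for illustration.

```python
from sklearn.model_selection import KFold

case_ids = [f"case_{i:03d}" for i in range(50)]  # hypothetical training cases

kfold = KFold(n_splits=5, shuffle=True, random_state=12345)
for fold, (train_idx, val_idx) in enumerate(kfold.split(case_ids)):
    train_cases = [case_ids[i] for i in train_idx]
    val_cases = [case_ids[i] for i in val_idx]
    # Each fold trains on ~80% of the cases and validates on the remaining ~20%;
    # those per-fold validation scores are what a best-configuration search compares.
    print(f"fold {fold}: {len(train_cases)} train / {len(val_cases)} val cases")

# "fold all" has no val_cases at all, so any Dice reported during training is
# effectively measured on data the model has already seen.
```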

JunMa11 commented 7 months ago

Hi @gumayusi3 ,

We didn't train that many folds.

Did you test this function with nnU-Net? If it works for nnU-Net, it should also work for U-Mamba.

gumayusi3 commented 7 months ago

Thank you, I have solved this problem. The reason for the low Dice is that the dataset is too small. I only got a high Dice because, when training with "fold all", the training set is the same as the validation set.
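A minimal sketch of the check this implies: keep a test set the model never sees during training and compute Dice there, instead of trusting the training-time score from "fold all". The `dice()` helper and the array shapes are illustrative only; any segmentation output in the same label space would work.

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray, label: int = 1) -> float:
    """Dice coefficient for one foreground label: 2*|P∩G| / (|P| + |G|)."""
    p = (pred == label)
    g = (gt == label)
    denom = p.sum() + g.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(p, g).sum() / denom

# Hypothetical prediction / ground truth for one held-out case
gt = np.zeros((64, 64), dtype=np.uint8)
gt[20:40, 20:40] = 1
pred = np.zeros_like(gt)
pred[22:42, 22:42] = 1

print(f"held-out Dice: {dice(pred, gt):.3f}")
```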

yanfangHao commented 4 months ago

Hi, I'm having the same problem. May I ask how many samples your dataset had before and after expansion? What Dice values did you get after expansion? Thank you very much!

gumayusi3 commented 3 months ago

The dataset had about 50 images before augmentation. After augmenting it to about 1,000 images, the Dice increased from roughly 0.3 to roughly 0.7, but that is still far from the theoretical optimum.
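For anyone trying the same thing, here is a minimal sketch of the kind of offline augmentation that can grow ~50 images toward ~1,000: random flips and 90-degree rotations applied jointly to image and mask. The data, the 20x expansion factor, and the `augment` helper are illustrative assumptions, not the setup used above; note that nnU-Net/U-Mamba also apply their own on-the-fly augmentation during training.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image: np.ndarray, mask: np.ndarray):
    """Return one randomly rotated/flipped copy of an (image, mask) pair."""
    k = int(rng.integers(0, 4))                 # number of 90-degree rotations
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    if rng.random() < 0.5:                      # random horizontal flip
        image, mask = np.fliplr(image), np.fliplr(mask)
    return image.copy(), mask.copy()

# Hypothetical original dataset: 50 image/mask pairs
originals = [(rng.random((128, 128)), rng.integers(0, 2, (128, 128))) for _ in range(50)]

augmented = []
for img, msk in originals:
    for _ in range(20):                         # ~20 copies per case -> ~1,000 samples
        augmented.append(augment(img, msk))

print(f"{len(originals)} original cases -> {len(augmented)} augmented samples")
```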