yhygao / CBIM-Medical-Image-Segmentation

A PyTorch framework for medical image segmentation
Apache License 2.0

Inconsistent results between your nnFormer and the original nnFormer repo #28

Open Liiiii2101 opened 9 months ago

Liiiii2101 commented 9 months ago

Hi, thanks for your excellent work. I have tried running your models as well as the other models provided in this repo, but I found a large inconsistency between the results of your models and the original nnFormer repo, even with the same patch size, spacing, and other parameters. For your nnFormer, the average DSC over 5-fold cross-validation is around 0.5, while for the original nnFormer it is 0.62. This makes me wonder whether you only tuned your MedFormer, and the results for the other models were not tuned?

Thanks a lot.

yhygao commented 9 months ago

The MedFormer in this repo is trained from scratch on all datasets without any pretrained weights. For nnFormer, I copied their original model code with very minor modifications to make it work in our repo. The performance difference between our repo and the nnFormer repo is likely due to other training hyper-parameters, such as the learning rate, optimizer, and number of epochs. In my experience, nnFormer is sensitive to hyper-parameters and needs special tuning, in contrast to ResUNet or MedFormer. Some recent papers report similar findings: https://arxiv.org/pdf/2304.03493.pdf. You might need to try other training hyper-parameters to see if they can match the performance of the original nnFormer repo.
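
For illustration, below is a minimal sketch of the kind of optimizer/schedule settings one might try when reproducing nnFormer-style training. The specific values (SGD with momentum 0.99, poly learning-rate decay, base lr 0.01, 1000 epochs) are common nnU-Net-style defaults and are assumptions here, not confirmed settings from either repo:

```python
import torch
from torch.optim.lr_scheduler import LambdaLR

def build_nnformer_style_optimizer(model, base_lr=0.01, momentum=0.99,
                                   weight_decay=3e-5, max_epochs=1000):
    """Illustrative only: approximates nnU-Net/nnFormer-style training settings.

    The exact hyper-parameters used by either repo are not stated in this
    thread and may differ; treat these values as a starting point for tuning.
    """
    optimizer = torch.optim.SGD(
        model.parameters(),
        lr=base_lr,
        momentum=momentum,
        nesterov=True,
        weight_decay=weight_decay,
    )
    # Polynomial decay: lr = base_lr * (1 - epoch / max_epochs) ** 0.9
    scheduler = LambdaLR(
        optimizer,
        lr_lambda=lambda epoch: (1 - epoch / max_epochs) ** 0.9,
    )
    return optimizer, scheduler
```

Comparing a configuration like this against the settings used in this repo (e.g. a different optimizer, learning rate, or epoch count) is one way to check whether the gap comes from training hyper-parameters rather than the model code itself.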