zackchen-lb opened this issue 2 years ago
Hi @ZEKAICHEN, thanks for raising the issue. We've double-checked and re-run test.py. If the code used is from https://github.com/Project-MONAI/research-contributions/tree/main/SwinUNETR/BTCV and the downloaded Swin UNETR/Base model is loaded, it should give the Dice scores below with overlap=0.5:
Inference on case img0035.nii.gz
Mean Organ Dice: 0.7715836852979835
Inference on case img0036.nii.gz
Mean Organ Dice: 0.8377579306350628
Inference on case img0037.nii.gz
Mean Organ Dice: 0.8386162560902106
Inference on case img0038.nii.gz
Mean Organ Dice: 0.7809781930534572
Inference on case img0039.nii.gz
Mean Organ Dice: 0.8375578949580794
Inference on case img0040.nii.gz
Mean Organ Dice: 0.8275152177091785
Overall Mean Dice: 0.815668196290662
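As a quick sanity check on your own runs, the Overall Mean Dice printed above appears to be simply the unweighted average of the six per-case scores:

```python
# Per-case Mean Organ Dice scores listed above (img0035 .. img0040)
case_dice = [
    0.7715836852979835,
    0.8377579306350628,
    0.8386162560902106,
    0.7809781930534572,
    0.8375578949580794,
    0.8275152177091785,
]

# Overall score is the unweighted mean of the per-case scores
overall = sum(case_dice) / len(case_dice)
print(overall)  # ~0.8157, matching the reported Overall Mean Dice
```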
Could you provide more details of your testing implementation? We can help dig deeper into the problem. Thanks!
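For context, overlap=0.5 controls how far consecutive sliding-window ROIs are shifted during inference: each window advances by roughly roi_size * (1 - overlap) voxels, so neighbouring windows share about half their extent. A minimal pure-Python sketch of the window placement along one axis (an illustration of the idea, not MONAI's actual sliding_window_inference implementation; the function name is hypothetical):

```python
def window_starts(image_len, roi_len, overlap):
    """Start offsets of sliding windows along one axis.

    Windows advance by roi_len * (1 - overlap); the final window
    is clamped so it ends exactly at the image border.
    """
    step = max(1, int(roi_len * (1 - overlap)))
    starts = list(range(0, max(image_len - roi_len, 0) + 1, step))
    # ensure the last window still reaches the image border
    if starts[-1] + roi_len < image_len:
        starts.append(image_len - roi_len)
    return starts

# A 96-voxel ROI over a 240-voxel axis with overlap=0.5 steps by 48 voxels
print(window_starts(240, 96, 0.5))  # [0, 48, 96, 144]
```

A lower overlap (e.g. 0.25) takes larger steps, so inference is faster but predictions near window borders are averaged over fewer views, which can shift the Dice slightly.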
Inference on case img0035.nii.gz
Mean Organ Dice: 0.7166531048630541
Inference on case img0036.nii.gz
Mean Organ Dice: 0.8339335328766061
Inference on case img0037.nii.gz
Mean Organ Dice: 0.8132532298131311
Inference on case img0038.nii.gz
Mean Organ Dice: 0.7672722978843951
Inference on case img0039.nii.gz
Mean Organ Dice: 0.79168553306617
Inference on case img0040.nii.gz
Mean Organ Dice: 0.8210711235933807
Overall Mean Dice: 0.7906448036827896
Hi there,
The provided model weights for BTCV (swinunetr-base) don't reproduce the reported mean Dice score on the validation set. I only get a mean Dice score of around 0.16 to 0.2, which is far below the reported 0.8.
Basically, I used the Google Colab code as follows:
The model was loaded from the pretrained weights you provided, as shown below, and the data transforms and data loader are set up exactly as provided:
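One common cause of a near-random Dice score like 0.16 is that the checkpoint keys silently fail to match the model's state dict, for example a "module." prefix left over from DataParallel training, or loading with strict=False so mismatched keys are quietly dropped and the network keeps its random initialization. A minimal pure-Python sketch of the kind of key check worth running before inference (the function and the example key names are hypothetical):

```python
def check_state_dict_keys(model_keys, ckpt_keys):
    """Report keys that would not match when loading a checkpoint."""
    model_keys, ckpt_keys = set(model_keys), set(ckpt_keys)
    return {
        "missing_in_ckpt": sorted(model_keys - ckpt_keys),
        "unexpected_in_ckpt": sorted(ckpt_keys - model_keys),
    }

# Hypothetical example: a DataParallel-style "module." prefix breaks every key,
# so with strict=False nothing would actually be loaded into the model.
model_keys = ["swinViT.patch_embed.proj.weight", "out.conv.weight"]
ckpt_keys = ["module." + k for k in model_keys]
report = check_state_dict_keys(model_keys, ckpt_keys)
print(report["missing_in_ckpt"])  # every model key is unmatched
```

If the report is non-empty for your checkpoint, stripping the prefix from the checkpoint keys before calling load_state_dict usually resolves it.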
I wonder if I actually missed anything here. I'd appreciate your feedback, thanks!