Closed overbestfitting closed 2 years ago
Hi, Authors,
Could you please reply to my questions/concerns? Thank you!
Hi, I'm not the author, but I would like to ask where the code of the pre-trained Swin-UNETR is? I can't find it. Thank you very much.
Hi @overbestfitting ,
Thanks for the comment. There seems to have been a mistake here.
Please check the latest pre-print (link) and leaderboard. Pre-trained Swin UNETR achieves an average Dice of 0.918, which outperforms UNETR's 0.891. The difference is also reflected in the organ-wise Dice comparisons.
Thanks
Dear Authors,
Thanks so much for the great work! I found these two papers from your group quite interesting: [1] UNETR: Transformers for 3D Medical Image Segmentation; [2] Self-Supervised Pre-Training of Swin Transformers for 3D Medical Image Analysis.
When I looked at the accuracy in Table I of [2] for both UNETR and Swin UNETR, it suggested that the accuracy improvement is fairly minor, and some organs have exactly the same DSC. For example: RKid 0.942, LKid 0.954, Aor 0.948. It also suggested that the accuracy gains come from Veins, Pan, and AGs.
So I downloaded the segmentations you submitted to the BTCV website, named "UNETR_newVer1.zip" for UNETR and "3D_SSL_pretrain_swinTransformer_and_SwinUNETR_v2.zip" for pre-trained Swin-UNETR. Please correct me if I downloaded the wrong ones.
I did a DSC calculation between these two submissions over the 20 test sets. Interestingly, most of the DSC values between the two submissions are 1. A DSC of 1 means the two segmentations are identical, requiring a pixel-level match. So my question is: how would this be possible, given that the segmentations come from different models, even with 10-model ensembling for each of the two models?
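For reference, here is a minimal sketch of the per-organ DSC check I did between the two submissions. The function names and the toy arrays are my own; the real check loads the label volumes from the two zip files (e.g. with nibabel) and compares them label by label:

```python
import numpy as np

def dice(seg_a, seg_b, label):
    """Dice similarity coefficient for one organ label between two label volumes."""
    a = (seg_a == label)
    b = (seg_b == label)
    denom = a.sum() + b.sum()
    if denom == 0:
        return float("nan")  # organ absent in both volumes: DSC undefined
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy example: identical volumes give DSC = 1.0 for every organ,
# which is what I observed between the two submissions.
rng = np.random.default_rng(0)
seg_a = rng.integers(0, 4, size=(8, 8, 8))
seg_b = seg_a.copy()
for organ in (1, 2, 3):
    print(organ, dice(seg_a, seg_b, organ))  # prints 1.0 for each organ
```

A DSC of exactly 1.0 across most organs and cases would require the two models to produce voxel-identical predictions, which is what prompted my question.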
I am looking forward to your reply, and I apologize if I missed something.
Thanks !