Project-MONAI / research-contributions

Implementations of recent research prototypes/demonstrations using MONAI.
https://monai.io/

Implementation details of finetuning Swin UNETR on BTCV #94

Closed hanoonaR closed 2 years ago

hanoonaR commented 2 years ago

Hi, thank you for sharing your great work. Could you please clarify some implementation details for fine-tuning Swin UNETR on the BTCV dataset?

1) Could you confirm that the numbers reported on BTCV in the "Self-Supervised Pre-Training of Swin Transformers for 3D Medical Image Analysis" paper come from fine-tuning the pre-trained "Swin UNETR/Base" model on the BTCV data?

2) Could you share the batch size and learning rate used when fine-tuning on BTCV to obtain the reported numbers?

Thank you.

tangy5 commented 2 years ago

Hi @hanoonaR, thanks for your interest in the work.

  1. Yes, the experiments are fine-tuned from the pre-trained "Swin UNETR/Base" model. Note, however, that the number reported on the leaderboard is obtained with additional training data; please refer to Table 4 for the single-fold, single-model performance.
  2. The batch size is 4 per GPU, and the model is trained on 4 x 32 GB GPUs. The initial learning rate is set to 1e-4 x [number of GPUs] for DDP training (see the sketch below).
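
For reference, here is a minimal sketch of how this setup could look with MONAI. The checkpoint path, `feature_size=48`, and the AdamW/weight-decay settings are assumptions taken from the public BTCV fine-tuning recipe, not details confirmed in this thread:

```python
# Sketch only: fine-tuning Swin UNETR/Base on BTCV from self-supervised
# pre-trained weights, with the learning rate scaled by GPU count for DDP.
import torch
import torch.distributed as dist
from monai.networks.nets import SwinUNETR

# Swin UNETR/Base for BTCV: 1 input channel (CT), 14 output classes
model = SwinUNETR(
    img_size=(96, 96, 96),
    in_channels=1,
    out_channels=14,
    feature_size=48,       # assumed "Base" configuration
    use_checkpoint=True,
)

# Load the self-supervised pre-trained Swin ViT encoder weights
# (hypothetical local path to the released checkpoint)
weights = torch.load("./pretrained_models/model_swinvit.pt")
model.load_from(weights=weights)

# Initial learning rate = 1e-4 x [number of GPUs] in DDP training
base_lr = 1e-4
world_size = dist.get_world_size() if dist.is_initialized() else 1
optimizer = torch.optim.AdamW(
    model.parameters(), lr=base_lr * world_size, weight_decay=1e-5
)
```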

Hope this helps. Thanks.