Open · upupming opened this issue 2 years ago
Hi @upupming
I see. We trained the model using 4 nodes with 8 GPUs (total 32). We noticed scaling up the pre-training to multi-node is important for convergence.
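For reference, here is a minimal sketch of what a multi-node data-parallel launch typically looks like with PyTorch DDP and `torchrun`. It uses a toy `Conv3d` model and random volumes as placeholders, so it is a generic illustration of the mechanics, not the authors' actual pre-training script:

```python
# Launch on each of 4 nodes (8 GPUs per node, 32 GPUs total), e.g.:
#   torchrun --nnodes=4 --nproc_per_node=8 --node_rank=<0..3> \
#            --master_addr=<head-node-ip> --master_port=29500 pretrain_ddp.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

def main():
    # torchrun populates RANK, LOCAL_RANK and WORLD_SIZE for every process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder 3D model and random volumes; swap in the Swin-UNETR
    # pre-training model and the real CT dataset here.
    model = torch.nn.Conv3d(1, 1, kernel_size=3, padding=1).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    data = TensorDataset(torch.randn(64, 1, 96, 96, 96))

    sampler = DistributedSampler(data)
    loader = DataLoader(data, batch_size=2, sampler=sampler)

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    loss_fn = torch.nn.L1Loss()

    for epoch in range(2):
        sampler.set_epoch(epoch)  # reshuffle across ranks every epoch
        for (x,) in loader:
            x = x.cuda(local_rank, non_blocking=True)
            recon = model(x)
            loss = loss_fn(recon, x)  # toy reconstruction objective
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Roughly speaking, a per-GPU batch of 2 across 32 GPUs gives an effective global batch of 64, which is presumably part of why the multi-node setup helps convergence.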
Hi @ahatamiz, I meet the same problem, as I am pre-training the model on a single GPU with batch size 2. My input and ground truth are shown in the images, but the output does not seem to have any resemblance to the original image. Do you mean we need to train on multiple GPUs to get good results? (images: x1_aug, x1_gt, x1_recon)
Hi @GaoHuaZhang,
Thanks, batch size should be a key point. We don't have the record for this training any more, but this tutorial, https://github.com/Project-MONAI/tutorials/tree/main/self_supervised_pretraining, uses a similar pre-training strategy, and you can see loss curves there.
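If multi-GPU hardware is not available, one generic workaround (not something from the Swin-UNETR repo) is gradient accumulation, which approximates a larger effective batch on a single GPU. A minimal sketch with a toy model and random patches, assuming a per-step batch of 2:

```python
import torch

model = torch.nn.Conv3d(1, 1, kernel_size=3, padding=1).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = torch.nn.L1Loss()

accum_steps = 16          # 2 (per-step batch) x 16 = effective batch of 32
optimizer.zero_grad()
for step in range(160):   # toy loop over random 3D patches
    x = torch.randn(2, 1, 96, 96, 96, device="cuda")
    loss = loss_fn(model(x), x) / accum_steps  # scale so gradients average correctly
    loss.backward()                            # gradients accumulate across steps
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```

Note this only mimics a larger batch for gradient averaging; it does not reduce per-step memory or speed up training.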
Thanks
Have you solved the problem? I have the same problem as you.
Hi, thanks for your great work on Swin-UNETR. I am trying to run pre-training on another dataset (~2000 CTs), but the loss curve does not seem to decrease:
Could you share your loss curve on the 5050 CTs dataset? Thank you very much!
I am pre-training the model on a single GPU with batch size 2.