Closed: HadiHammoud44 closed this issue 2 weeks ago
Hello, many thanks for your impressive work!
Could you provide the pretraining checkpoints used in the original CVPR submission? I understand there are two checkpoints: one pre-trained on BTCV and TCIA COVID-19 (for the downstream BTCV experiments), and another pre-trained on BTCV, TCIA COVID-19, and LUNA.
My email: hadi.hammoud@epfl.ch
Dear Hadi, many thanks for your attention to our work. You can find our checkpoints at https://github.com/Luffy03/Large-Scale-Medical.
After checking the SSL section, I see the available checkpoints. Am I right to assume VoCo_B_SSL_head is the checkpoint used for the CVPR submission (for all datasets except BTCV)? And where can I find the checkpoint that was pre-trained on BTCV and COVID alone?
Thanks for your help
Dear Hadi, we sincerely hope you will compare with our new version at https://github.com/Luffy03/Large-Scale-Medical, since it is much more powerful than the original version. If you insist on using the old version, you can find it at https://www.dropbox.com/scl/fi/gatmukpmagzmi3xo9czd1/VoCo_cvpr.pt?rlkey=5dl5rcz8kex7c1tzi3p6muui3&st=ga8thstf&dl=0.
I appreciate your help. I will indeed compare with the latest version, but I am using the old version for purposes other than benchmarking. Just to be sure: are the shared weights (on Dropbox) pre-trained on BTCV and COVID only, or do they also include LUNA?
OK, for BTCV and COVID only, you can check VoCo_for_HadiHammoud44.pt, but we don't recommend using only these two datasets for pre-training.
Thank you very much for your prompt responses! Much appreciated
Hello, while checking the provided weights that were pre-trained on the three datasets, I noticed layers from UNETR that serve as the decoder for the downstream task. Specifically, I found the following among the layers in the checkpoint:
```
...
swinViT.layers4c.0.layer.conv1.conv.weight torch.Size([384, 384, 3, 3, 3])
swinViT.layers4c.0.layer.conv2.conv.weight torch.Size([384, 384, 3, 3, 3])
encoder1.layer.conv1.conv.weight torch.Size([48, 1, 3, 3, 3])
encoder1.layer.conv2.conv.weight torch.Size([48, 48, 3, 3, 3])
encoder1.layer.conv3.conv.weight torch.Size([48, 1, 1, 1, 1])
encoder2.layer.conv1.conv.weight torch.Size([48, 48, 3, 3, 3])
encoder2.layer.conv2.conv.weight torch.Size([48, 48, 3, 3, 3])
encoder3.layer.conv1.conv.weight torch.Size([96, 96, 3, 3, 3])
encoder3.layer.conv2.conv.weight torch.Size([96, 96, 3, 3, 3])
encoder4.layer.conv1.conv.weight torch.Size([192, 192, 3, 3, 3])
encoder4.layer.conv2.conv.weight torch.Size([192, 192, 3, 3, 3])
encoder10.layer.conv1.conv.weight torch.Size([768, 768, 3, 3, 3])
encoder10.layer.conv2.conv.weight torch.Size([768, 768, 3, 3, 3])
```
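A listing like the one above can be produced with a short PyTorch script. Below is a minimal sketch, assuming the file is an ordinary torch checkpoint whose weights may or may not be wrapped in a "state_dict" entry; the path is a placeholder.

```python
import torch

# Placeholder path: point this at the downloaded VoCo checkpoint.
ckpt_path = "VoCo_cvpr.pt"

# map_location="cpu" lets the checkpoint load without a GPU.
ckpt = torch.load(ckpt_path, map_location="cpu")

# Some checkpoints store the weights under a "state_dict" key,
# others are the bare state dict itself; handle both cases.
state_dict = ckpt.get("state_dict", ckpt)

# Print every parameter name together with its shape, as in the
# listing above.
for name, tensor in state_dict.items():
    print(name, tuple(tensor.shape))
```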
Are the encoder layers used in any way during pre-training? If not, are these random weights?
Thank you for your assistance
Hi, these layers (from SwinUNETR) are also pre-trained.
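For readers who want to reuse these weights downstream, here is a minimal sketch of warm-starting MONAI's SwinUNETR from such a checkpoint. It is not the repo's official loading code; the architecture arguments (96^3 input patches, feature_size=48 for the base model, 14 output classes for BTCV) and the key layout are assumptions to verify against the repo's fine-tuning scripts.

```python
import torch
from monai.networks.nets import SwinUNETR

# Minimal sketch, not the repo's official fine-tuning code. The
# hyper-parameters below (96^3 patches, feature_size=48 for the "B"
# model, 14 BTCV classes) are assumptions; check the repo's scripts.
# The "layersXc" keys in the listing above may additionally require
# use_v2=True, depending on the MONAI version.
model = SwinUNETR(
    img_size=(96, 96, 96),
    in_channels=1,
    out_channels=14,
    feature_size=48,
)

ckpt = torch.load("VoCo_cvpr.pt", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt)

# strict=False: keys present only in the checkpoint (e.g. a
# pre-training head) or only in the model are skipped instead of
# raising an error.
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print(f"missing keys: {len(missing)}, unexpected keys: {len(unexpected)}")
```

As a quick sanity check after loading, the unexpected-key count should be small and limited to pre-training-specific modules; a large count usually indicates a key-prefix mismatch (e.g. a leading "module." left over from DataParallel).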