MrGiovanni / SuPreM

[ICLR 2024 Oral] Supervised Pre-Trained 3D Models for Medical Image Analysis (9,262 CT volumes + 25 annotated classes)
https://www.cs.jhu.edu/~alanlab/Pubs23/li2023suprem.pdf

Loading suite of Pre-trained Models in SuPreM #3

Closed HashmatShadab closed 9 months ago

HashmatShadab commented 9 months ago

Dear authors,

Thank you for your excellent work on this project and for providing pretrained weights for each backbone.

Currently, it's unclear how to load the provided pretrained weights successfully. The documentation lacks instructions regarding this. Specifically, I am unsure about any modifications that might be needed for the backbone architecture to appropriately adjust them for the pretrained weights. Is there a particular configuration required for each backbone, distinct steps for loading weights, or any other relevant information that might be crucial?

I would greatly appreciate it if you could point me in the right direction, or perhaps consider expanding the existing documentation to include a guide on this aspect.

MrGiovanni commented 9 months ago

Hi @HashmatShadab

Thanks for your interest in our work.

To load the pre-trained weights and use them for a downstream task, you can refer to our example at https://github.com/MrGiovanni/SuPreM/blob/main/target_applications/totalsegmentator/README.md

This example can reproduce the transfer learning experiment on the TotalSegmentator dataset (reported in our paper).

For using other pre-trained weights/backbones (released by other teams), we will add more detailed instructions in the next few weeks.
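
The loading step in the linked README boils down to restoring a checkpoint's state dict into the backbone. Here is a minimal, framework-agnostic sketch of the usual filtering pattern; the `module.` key prefix and the key/shape-matching rule are assumptions about how the released `.pth` files are organized, not the actual SuPreM loader (the TotalSegmentator example is authoritative):

```python
def load_filtered_state_dict(model_state, checkpoint_state, prefix="module."):
    """Strip a DataParallel-style prefix from checkpoint keys, then keep only
    entries whose key exists in the target model with a matching tensor shape.
    This is a hypothetical helper illustrating the common pattern, not the
    exact code in SuPreM's train.py."""
    cleaned = {}
    for key, value in checkpoint_state.items():
        if key.startswith(prefix):
            key = key[len(prefix):]  # e.g. "module.encoder.w" -> "encoder.w"
        if key in model_state and value.shape == model_state[key].shape:
            cleaned[key] = value
    return cleaned
```

With a real PyTorch model this would typically be followed by `model.load_state_dict(cleaned, strict=False)`, so that task-specific heads absent from the checkpoint are left randomly initialized.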

Best, Zongwei

HashmatShadab commented 9 months ago

Thanks for clarifying.

In the train script, for the argument at https://github.com/MrGiovanni/SuPreM/blob/931583bc3ee0bcf894e37f015c8671c21e2829ab/target_applications/totalsegmentator/train.py#L96, which weights are being loaded?

For the other backbone names, I am able to load the weights provided in the repo:

- `model_backbone == 'ssl'`: `self_supervised_nv_swin_unetr_50000.pth`
- `model_backbone == 'swinunetr'`: `supervised_suprem_swinunetr_2100.pth`
- `model_backbone == 'selfswinunetr'`: `self_supervised_nv_swin_unetr_5050.pt`

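
The pairings above can be sketched as a simple lookup from the `--model_backbone` value to a checkpoint file. The dictionary below is a hypothetical reconstruction (it also restores the leading "s" that appears to be dropped from `supervised_suprem_swinunetr_2100.pth`); the actual train.py may wire this up differently:

```python
# Hypothetical mapping from --model_backbone values to released checkpoint
# filenames, based on the pairings reported in this thread.
BACKBONE_CHECKPOINTS = {
    "ssl": "self_supervised_nv_swin_unetr_50000.pth",
    "swinunetr": "supervised_suprem_swinunetr_2100.pth",
    "selfswinunetr": "self_supervised_nv_swin_unetr_5050.pt",
}

def checkpoint_for(backbone):
    """Return the checkpoint filename for a backbone name, with a clear
    error for unrecognized values."""
    try:
        return BACKBONE_CHECKPOINTS[backbone]
    except KeyError:
        raise ValueError(
            f"unknown backbone {backbone!r}; "
            f"expected one of {sorted(BACKBONE_CHECKPOINTS)}"
        )
```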
MrGiovanni commented 9 months ago

Hi @HashmatShadab

`if args.model_backbone == "selfswin":`

This is for our internal check. We reproduced Swin UNETR on 5,050 CT scans by ourselves and then compared our weights with those officially released by NV. We found that we had no problem reproducing their results and even achieved better transfer learning performance than the official weights.

More benchmark details can be found in Appendix Table 8 of our paper.


We did not release these reproduced weights because they are not our contribution. But if you are interested, you could download these weights from this Dropbox link.

Best, Zongwei

HashmatShadab commented 9 months ago

Thanks a lot!