batmanlab / Mammo-CLIP

Official Pytorch implementation of MICCAI 2024 paper (early accept, top 11%) Mammo-CLIP: A Vision Language Foundation Model to Enhance Data Efficiency and Robustness in Mammography
https://shantanu-ai.github.io/projects/MICCAI-2024-Mammo-CLIP/
Creative Commons Attribution 4.0 International

Image encoders besides EfficientNets for downstream classification tasks #16

Closed marianamourao-37 closed 2 months ago

marianamourao-37 commented 2 months ago

Hello :)

Congrats on the work developed!

I am interested in using pre-trained image encoders for downstream classification tasks. As I understand it, the available checkpoints are for EfficientNets. If I wanted to consider ResNets or Swin Transformer, would I need to first pre-train Mammo-CLIP with these image encoders using the train.py script?

Thanks in advance!

shantanu-ai commented 2 months ago

Hi @marianamourao-37, thanks for taking an interest in our paper. The released pre-trained weights give you the pre-trained EfficientNet-B5 and EfficientNet-B2 image encoders, respectively. If you want a ResNet or Swin Transformer encoder, you would have to pre-train Mammo-CLIP from scratch with your own data, following the readme.md in our repo. Thanks!

marianamourao-37 commented 2 months ago

Thanks for the quick reply.

Does "pre-train from scratch with my data" mean that the resulting model would not benefit from the private dataset used in your work?

shantanu-ai commented 2 months ago

Yes.