batmanlab / Mammo-CLIP

Official PyTorch implementation of the MICCAI 2024 paper (early accept, top 11%): Mammo-CLIP: A Vision Language Foundation Model to Enhance Data Efficiency and Robustness in Mammography
https://shantanu-ai.github.io/projects/MICCAI-2024-Mammo-CLIP/
Creative Commons Attribution 4.0 International

Radiology text #8

Closed: andy-li-cv closed this issue 3 months ago

andy-li-cv commented 3 months ago

Hi, I have a couple of questions:

  1. Will you make the radiology texts publicly available?
  2. Did you include RSNA data during pre-training as well?
kayhan-batmanghelich commented 3 months ago

The private dataset will stay private; we are legally responsible for it. I leave the RSNA question to @shantanu.


shantanu-ai commented 3 months ago

Hi @andy-li-cv, please find the answers below:

  1. No, we will not release any radiology texts. We have released the model checkpoints, so you can fine-tune or evaluate them. We have also released the code and sample CSV files mirroring the datasets (the texts in them are templated, not the actual reports), so you can pretrain yourself.
  2. No. The RSNA dataset was used only for evaluation.
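For reference, templated prompts like those shipped in the repo's prompt.json can be paired with the released encoders for CLIP-style zero-shot scoring. The sketch below is illustrative only: the template strings and `build_prompts` helper are assumptions (not the actual prompt.json schema), and plain cosine similarity over toy vectors stands in for the real Mammo-CLIP encoders.

```python
# Illustrative sketch of templated-prompt scoring, CLIP-style.
# The templates and helper names are hypothetical, not Mammo-CLIP's actual API.
import numpy as np

TEMPLATES = ["no {finding}", "{finding} is present"]  # assumed template format
FINDINGS = ["mass", "calcification"]

def build_prompts(templates, findings):
    """Expand each finding into its templated text prompts."""
    return [t.format(finding=f) for f in findings for t in templates]

def cosine_similarity(image_emb, text_embs):
    """Cosine similarity between one image embedding and each text embedding."""
    image_emb = image_emb / np.linalg.norm(image_emb)
    text_embs = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    return text_embs @ image_emb

prompts = build_prompts(TEMPLATES, FINDINGS)
# In practice, image_emb / text_embs would come from the released
# Mammo-CLIP vision and text encoders; here they are toy vectors.
sims = cosine_similarity(np.array([1.0, 0.0]),
                         np.array([[1.0, 0.0], [0.0, 1.0]]))
```

The prompt with the highest similarity to the image embedding would be taken as the predicted label for that finding.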
andy-li-cv commented 3 months ago

Thanks for the quick reply. I have another question: how is Mammo-Factor related to Mammo-CLIP? Which Mammo-CLIP components are used in Mammo-Factor? Did you only use the texts templated by radiologists? Can I use the reports from mammograms for localization?

shantanu-ai commented 3 months ago

Hi, Mammo-Factor needs the Mammo-CLIP vision and text encoders. To localize a finding, you pass the image and the text through the vision and text encoders to get the embeddings, then train with the contrastive loss in the paper, which aligns the text embeddings to the particular finding, e.g., mass or calcification. For the text, you can use the templated text listed in the appendix of the paper (in the prompt.json file in the codebase), or you can use real radiology reports for localization.
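The alignment step described above follows the CLIP-style contrastive recipe. Below is a minimal numerical sketch of a symmetric InfoNCE loss over matched image/text embedding pairs; it is a generic stand-in under assumed conventions (paired rows are positives, temperature 0.07), not the exact loss implementation from the paper, and the embeddings are placeholders rather than Mammo-CLIP outputs.

```python
# Generic symmetric InfoNCE sketch (assumption: row i of image_embs is
# paired with row i of text_embs, as in standard CLIP training).
import numpy as np

def info_nce(image_embs, text_embs, temperature=0.07):
    # L2-normalize both sets of embeddings.
    i = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    t = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = (i @ t.T) / temperature  # pairwise similarity matrix
    labels = np.arange(len(logits))   # positives sit on the diagonal

    def xent(l):
        # Numerically stable cross-entropy against the diagonal labels.
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # Symmetric: image-to-text and text-to-image directions.
    return 0.5 * (xent(logits) + xent(logits.T))
```

With perfectly aligned pairs the loss is near zero, and it grows as the text embeddings drift away from their paired image embeddings, which is what pulls the finding text (e.g. "mass") toward the matching image features during training.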