The private dataset will stay private. We are legally responsible. I leave the RSNA question to @shantanu.
On Wed, Aug 7, 2024 at 12:16 PM Andy wrote:
Hi, I have a couple of questions:
- Will you make the radiology texts publicly available?
- Did you include the RSNA data during pre-training as well?
Hi @andy-li-cv, please find the answers below:
Thanks for the quick answers. I have another question: how is Mammo-Factor related to Mammo-CLIP? Which Mammo-CLIP components does Mammo-Factor use? Did you only use the templated texts written by radiologists? Can I use the reports from mammograms for localization?
Hi, Mammo-Factor needs the Mammo-CLIP vision and text encoders. To localize a finding, you pass the image and the text through the vision and text encoders to get the embeddings, then train with the contrastive loss in the paper, which aligns the text embeddings to the particular finding, e.g., mass, calcification, etc. For the text, you can use the templated text mentioned in the appendix of the paper (in the prompt.json file in the codebase), or you can use real-life radiology texts for localization.
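To make that workflow concrete, here is a minimal PyTorch sketch of the alignment step. The `DummyVisionEncoder`, `DummyTextEncoder`, embedding size, and the symmetric InfoNCE-style loss below are illustrative assumptions, not the actual classes from this repo; in practice you would load the pretrained Mammo-CLIP encoders and use the exact contrastive loss defined in the paper.

```python
# Hedged sketch of Mammo-Factor-style text-to-finding alignment.
# All class names and hyperparameters here are placeholder assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DummyVisionEncoder(nn.Module):
    """Stand-in for the pretrained Mammo-CLIP vision encoder."""
    def __init__(self, dim=512):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.LazyLinear(dim))

    def forward(self, images):           # images: (B, 1, H, W)
        return self.backbone(images)     # (B, dim) image embedding

class DummyTextEncoder(nn.Module):
    """Stand-in for the pretrained Mammo-CLIP text encoder."""
    def __init__(self, vocab=30522, dim=512):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab, dim)

    def forward(self, token_ids):        # token_ids: (B, L)
        return self.emb(token_ids)       # (B, dim) text embedding

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE-style loss that pulls each image embedding
    toward its paired finding text and pushes apart mismatched pairs."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature   # (B, B) similarities
    targets = torch.arange(logits.size(0))         # matched pairs on diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Toy batch: each mammogram paired with a tokenized finding prompt,
# e.g. a templated text such as "mammogram showing a mass".
images = torch.randn(4, 1, 224, 224)
token_ids = torch.randint(0, 30522, (4, 16))

vision, text = DummyVisionEncoder(), DummyTextEncoder()
loss = contrastive_loss(vision(images), text(token_ids))
loss.backward()
print(f"contrastive loss: {loss.item():.4f}")
```

The same loop works whether `token_ids` come from the templated prompts (e.g., from prompt.json) or from tokenized real radiology reports; only the text source changes.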