nyukat / GMIC

An interpretable classifier for high-resolution breast cancer screening images utilizing weakly supervised localization
https://doi.org/10.1016/j.media.2020.101908
GNU Affero General Public License v3.0

Does this model need pixel-level segmentation masks of malignant and benign lesions? #10

Closed Steve-Pan closed 4 years ago

Steve-Pan commented 4 years ago

Hi, according to your paper, it seems that training and inference of this model only require image-level labels, with no need for annotations of malignant and benign lesions. However, from the code in this repo, segmentation paths of malignant and benign lesions are required to run the code. I am just wondering: if I don't have segmentation masks of malignant and benign lesions, how can I train and test your model on my own images? I look forward to your response.

seyiqi commented 4 years ago

Hi Steve,

You are correct that the model does not require segmentation for training.

This repo includes segmentation only to generate visualizations that compare the saliency maps with the ground-truth segmentation. You can either disable visualization in the run.sh file (unset --visualization-flag) or supply placeholder images as the ground-truth segmentation.

Hope it helps.
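The placeholder-segmentation workaround could be sketched as follows. This is a minimal sketch, assuming the pipeline reads 8-bit PNG masks of the same size as the input images; the file names and dimensions below are hypothetical, so match them to whatever the repo's data loader actually expects:

```python
import os
import tempfile

import numpy as np
from PIL import Image

# Hypothetical mammogram dimensions; replace with your own image size.
h, w = 2944, 1920

# An all-zero mask means "no annotated lesion anywhere", which is enough
# to satisfy the code path without providing real annotations.
blank = np.zeros((h, w), dtype=np.uint8)

# Hypothetical file names; use whatever names your data list references.
out_dir = tempfile.mkdtemp()
for name in ("sample_benign_seg.png", "sample_malignant_seg.png"):
    Image.fromarray(blank).save(os.path.join(out_dir, name))
```

Since the masks are only consumed by the visualization step, their content does not affect classification results.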

Steve-Pan commented 4 years ago

Hi Yiqiu,

Thanks very much for your clarification. Now I understand how this code works.

Currently, we have a mammography image set containing only image-level labels (cancer vs. no cancer) and coarse malignant-lesion annotations on the cancer images (no benign annotations). If we want to fine-tune your pretrained models on our own dataset for image-level cancer vs. no-cancer classification, do you have any suggestions?


seyiqi commented 4 years ago

Hi Steve,

Here is what I would do:

Hope this helps :)

Hong-Swinburne commented 3 years ago

Hi Yiqiu,

Thanks for your valuable advice.

Actually, I cannot find which loss function is used in the current code. It would be greatly appreciated if there were a chance to obtain the training code.

To fine-tune the pretrained model, should I tune the global, local, and fusion modules simultaneously, or just the fusion module? Since each module may affect the performance of the final model, what is the proper way/order to tune it, and which parameters are particularly important during fine-tuning?

Any suggestion is more than welcome. Thank you.
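One common option for the question above, fine-tuning only the fusion head while keeping the global and local modules frozen, can be sketched in PyTorch. The submodule names below are hypothetical stand-ins, not the actual GMIC attribute names; inspect `print(model)` in the repo to find the real ones:

```python
import torch


def freeze_all_but(model: torch.nn.Module,
                   trainable_keywords=("fusion",)) -> None:
    """Leave requires_grad=True only for parameters whose name
    contains one of the given keywords; freeze everything else."""
    for name, param in model.named_parameters():
        param.requires_grad = any(k in name for k in trainable_keywords)


# Toy stand-in for GMIC's three-module structure (names are assumptions).
class ToyGMIC(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.global_network = torch.nn.Linear(8, 4)  # global module stand-in
        self.local_network = torch.nn.Linear(8, 4)   # local module stand-in
        self.fusion = torch.nn.Linear(8, 1)          # fusion head stand-in


model = ToyGMIC()
freeze_all_but(model, trainable_keywords=("fusion",))

# The optimizer then only sees the still-trainable (fusion) parameters.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```

To unfreeze the whole network afterwards (e.g., for a second, lower-learning-rate pass over all modules), call `freeze_all_but` with keywords matching every module, or simply set `requires_grad = True` on all parameters.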