MedicineToken / Medical-SAM-Adapter

Adapting Segment Anything Model for Medical Image Segmentation
GNU General Public License v3.0

How to use the code for Inference? #74

Open zeinebBC opened 10 months ago

zeinebBC commented 10 months ago

I'm seeking clarity on how to use the code at inference time to test a fine-tuned model on a dataset without target masks. Is there any guidance in the associated paper or repository on how to do this? What prompting techniques can I use when I have no information about the target masks' locations? And how can I evaluate the accuracy of the predicted masks in the absence of target masks?

FJGEODEV commented 8 months ago

+1, I was trying to use val.py, but no luck. May need author's help.

WuJunde commented 8 months ago

1. You cannot evaluate prediction accuracy without target masks (i.e., ground truth).

2. SAM is an interactive model, so the common assumption is that the user provides a prompt for each image (e.g., a click on the target object). In this code we generate that prompt from the target mask to simulate a user-given prompt. If you have neither a user-given prompt nor a target-mask-generated prompt, you may want to try the "segment everything" setting described in the SAM paper: prompt the original image with clicks on a grid and keep the model's top-k high-confidence predicted objects (see the sketch below). To use it, you need to train the adapters under this setting.
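
For illustration, a minimal sketch of that grid-prompting loop. Nothing below is this repository's API: the model call signature, the returned score, and the grid size are assumptions you would adapt to your own SAM wrapper.

```python
import torch

def grid_points(h, w, n=32):
    """Build an n x n grid of (x, y) click coordinates over an h x w image."""
    xs = torch.linspace(0, w - 1, n)
    ys = torch.linspace(0, h - 1, n)
    gx, gy = torch.meshgrid(xs, ys, indexing="xy")
    return torch.stack([gx, gy], dim=-1).reshape(-1, 2)

@torch.no_grad()
def segment_everything(model, image, k=10):
    """Prompt every grid point as a positive click and keep the top-k masks."""
    h, w = image.shape[-2:]
    results = []
    for pt in grid_points(h, w):
        # hypothetical call signature: adapt to however your SAM wrapper
        # accepts point prompts and returns a confidence score per mask
        mask, score = model(image,
                            point_coords=pt.view(1, 1, 2),
                            point_labels=torch.ones(1, 1))
        results.append((float(score.max()), mask))
    results.sort(key=lambda r: r[0], reverse=True)
    return [m for _, m in results[:k]]
```
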
janexue001 commented 8 months ago

According to the instruction of readme.md, I have trained and obtained best_checkpoint. May I ask how to call checkpoint for subsequent segmentation tasks?

WuJunde commented 8 months ago

Evaluation: the code automatically evaluates the model on the test set during training; set "--val_freq" to control how many epochs to wait between evaluations. You can also run val.py for an independent evaluation.

Result Visualization: set the "--vis" parameter to control how often (in epochs) results are visualized during training or evaluation.

By default, everything is saved under ./logs/.
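
For the checkpoint question, a rough sketch of what standalone loading could look like. build_model, the checkpoint path, the 'state_dict' key, and the forward call below are placeholders, not the repo's actual code; mirror whatever val.py does in your copy.

```python
import torch
from torchvision.utils import save_image

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

net = build_model()  # placeholder: construct the same network you trained
ckpt = torch.load("./logs/<your_exp>/best_checkpoint", map_location=device)
state = ckpt.get("state_dict", ckpt)  # some checkpoints wrap weights in a dict
net.load_state_dict(state)
net.to(device).eval()

with torch.no_grad():
    # image: a tensor preprocessed (and prompted) exactly as in the validation loop
    pred = net(image.to(device))

# quick visual check of the predicted mask(s)
save_image(torch.sigmoid(pred).float(), "pred_mask.png")
```
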

janexue001 commented 8 months ago

Thank you very much for your reply. The detailed records of the model training process can indeed be found in logs. However, I also want to see the visual segmentation results produced by the trained model.

In addition, I would like to ask one more question.

I tried multi-class segmentation by setting "-multimask_output" to 2 in cfg.py, which worked with the sam model, but efficient_sam failed with "ValueError: Target size (torch.Size([16, 2, 256, 256])) must be the same as input size (torch.Size([16, 1, 256, 256]))".

All the best to you

Part-Work commented 7 months ago

How can I evaluate 'OpticDisc_Fundus_SAM_1024.pth' and 'sam_vit_b_01ec64.pth' on the REFUGE dataset?

visionbike commented 1 month ago

I tried multi-class segmentation by setting "-multimask_output" to 2 in cfg.py, which worked with the sam model, but efficient_sam failed with "ValueError: Target size (torch.Size([16, 2, 256, 256])) must be the same as input size (torch.Size([16, 1, 256, 256]))".

You need to modify the parts of EfficientSAM related to num_multimask_output, following SAM's code.
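
For context, a toy reproduction of that ValueError (not the repo's code): binary cross-entropy with logits requires prediction and target shapes to match, so the decoder must emit as many mask channels as the target has classes. In the repo, that means adjusting wherever EfficientSAM sets its equivalent of num_multimask_output.

```python
import torch
import torch.nn.functional as F

target = torch.randint(0, 2, (16, 2, 256, 256)).float()  # two-class target masks

# Decoder still emitting a single mask: shapes mismatch and the loss raises.
pred_1ch = torch.randn(16, 1, 256, 256)
try:
    F.binary_cross_entropy_with_logits(pred_1ch, target)
except ValueError as e:
    print(e)  # Target size (torch.Size([16, 2, 256, 256])) must be the same as input size ...

# Decoder configured for two outputs: shapes match and the loss computes.
pred_2ch = torch.randn(16, 2, 256, 256)
print(F.binary_cross_entropy_with_logits(pred_2ch, target).item())
```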