zeinebBC opened 10 months ago
+1, I was trying to use val.py, but no luck. May need the author's help.
Following the instructions in readme.md, I have trained the model and obtained best_checkpoint. How do I load this checkpoint for subsequent segmentation tasks?
Evaluation: The code can automatically evaluate the model on the test set during training; set "--val_freq" to control how many epochs between evaluations. You can also run val.py for an independent evaluation.
Result Visualization: You can set the "--vis" parameter to control how often (in epochs) to visualize the results during training or evaluation.
By default, everything is saved at ./logs/
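For standalone inference with the saved checkpoint, a minimal PyTorch sketch might look like the following. The `nn.Conv2d` stand-in model and the `'state_dict'` key are assumptions for illustration, not this repo's verified API; check val.py and the training script's `torch.save(...)` call for the actual loading code.

```python
import torch
import torch.nn as nn

# Stand-in for the real network -- replace with the same constructor used
# during training (this toy module exists only so the sketch runs end to end).
net = nn.Conv2d(3, 1, kernel_size=1)

# Load the checkpoint written during training (the path under ./logs/ depends
# on your experiment name). The 'state_dict' key is an assumption; inspect the
# saved file or the training script to confirm how the checkpoint is stored.
ckpt = torch.load('best_checkpoint', map_location='cpu')
state = ckpt['state_dict'] if isinstance(ckpt, dict) and 'state_dict' in ckpt else ckpt
net.load_state_dict(state)
net.eval()

with torch.no_grad():
    image = torch.randn(1, 3, 256, 256)           # preprocessed input image
    logits = net(image)                           # (1, 1, 256, 256) raw logits
    mask = (torch.sigmoid(logits) > 0.5).float()  # binary segmentation mask
```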
Thank you for your reply. The details of the training process can indeed be seen in the logs. Beyond that, however, I want to see the visual segmentation results produced by the trained model.
In addition, I would like to ask one more question.
I tried multi-class segmentation by setting "-multimask_output" in cfg.py to 2, which worked with the sam model, but with efficient_sam I get: "ValueError: Target size (torch.Size([16, 2, 256, 256])) must be the same as input size (torch.Size([16, 1, 256, 256]))".
All the best to you
How can I evaluate 'OpticDisc_Fundus_SAM_1024.pth' and 'sam_vit_b_01ec64.pth' on the 'REFUGE' dataset?
You need to modify the parts related to num_multimask_output in EfficientSAM, following SAM's code.
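The error means the loss is comparing a 2-channel target against a 1-channel prediction, i.e. the efficient_sam decoder is still emitting a single mask. A minimal sketch of the shape fix, assuming a decoder whose output-channel count is configurable (the `ToyDecoder` class and its `num_multimask_outputs` argument below are illustrative placeholders, not EfficientSAM's actual API):

```python
import torch
import torch.nn as nn

class ToyDecoder(nn.Module):
    """Illustrative stand-in for a mask decoder; the real fix is to make
    EfficientSAM's decoder emit num_multimask_outputs channels, as SAM does."""
    def __init__(self, num_multimask_outputs: int = 1):
        super().__init__()
        # One output channel per requested mask.
        self.head = nn.Conv2d(32, num_multimask_outputs, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.head(feats)  # (B, num_multimask_outputs, H, W)

decoder = ToyDecoder(num_multimask_outputs=2)     # match multimask_output = 2
pred = decoder(torch.randn(16, 32, 256, 256))     # -> (16, 2, 256, 256)
target = torch.randint(0, 2, (16, 2, 256, 256)).float()
loss = nn.BCEWithLogitsLoss()(pred, target)       # shapes now agree
```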
I'm seeking clarity on utilizing the code during inference for testing a fine-tuned model on a dataset without target masks. Is there any guidance provided in the associated paper or repository on how to perform this task effectively? What prompting techniques could I employ when I don't have information regarding the target masks' locations? How can I evaluate the accuracy of the predicted masks in the absence of target masks?