mileswyn / SAMIHS

Code for our Paper "SAMIHS: Adaptation of Segment Anything Model for Efficient Intracranial Hemorrhage Segmentation".

Question about the comparison experiments in the paper #2

Closed. 21-10-4 closed this issue 7 months ago.

21-10-4 commented 7 months ago

[image]

I'm a beginner, and I'm curious about the comparison between the different methods in the paper: which ones were retrained on the BCIHM and Instance datasets, and which directly used the weights provided by the original authors? For example, SAM takes 1024×1024 images as input while SAMUS takes 512×512; how are these configurations handled? Also, are hyperparameters such as the learning rate, and training strategies such as warmup and the choice of optimizer, kept consistent across methods? Looking forward to your reply.

mileswyn commented 7 months ago

Thanks for your interest~ U-Net, Att-UNet, U-Net++, and H2Former are trained from scratch. TransUNet and TransFuse are initialized with the checkpoints provided by their authors. SAM and MedSAM are tested directly without fine-tuning. SAMed, SAMUS, and MSA are fine-tuned following their code repositories and the descriptions in their papers.
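For readers skimming the thread, the training regimes described above can be summarized as a simple configuration map. A minimal sketch; the dictionary and regime labels are illustrative and not part of the SAMIHS codebase:

```python
# Hypothetical summary of the comparison protocol described above;
# not taken from the SAMIHS repository.
TRAINING_REGIME = {
    "U-Net": "from_scratch",
    "Att-UNet": "from_scratch",
    "U-Net++": "from_scratch",
    "H2Former": "from_scratch",
    "TransUNet": "author_checkpoint",  # initialized from the authors' weights
    "TransFuse": "author_checkpoint",
    "SAM": "zero_shot",                # tested directly, no fine-tuning
    "MedSAM": "zero_shot",
    "SAMed": "fine_tuned",             # fine-tuned per official repo and paper
    "SAMUS": "fine_tuned",
    "MSA": "fine_tuned",
}
```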

As for the input image size, you can follow the instructions and code of SAM and SAMUS. We compared these methods using the learning rates and optimizers stated in their papers. If you have further questions about our work, feel free to comment here or email me (wangyinuo@buaa.edu.cn).
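In practice, this means each model is fed at the resolution its own pipeline expects (1024 for SAM, 512 for SAMUS, per the discussion above). A minimal PyTorch-style sketch of that preprocessing; the helper and the size mapping are illustrative, assuming bilinear resizing for images and nearest-neighbor for masks:

```python
import torch
import torch.nn.functional as F

# Per-model input resolutions mentioned in the thread (illustrative mapping).
INPUT_SIZE = {"SAM": 1024, "MedSAM": 1024, "SAMUS": 512}

def resize_for_model(image: torch.Tensor, mask: torch.Tensor, model_name: str):
    """Resize a (B, C, H, W) image and (B, 1, H, W) mask to the
    square resolution a given model expects."""
    size = INPUT_SIZE[model_name]
    image = F.interpolate(image, size=(size, size), mode="bilinear",
                          align_corners=False)
    # Nearest-neighbor interpolation keeps mask labels discrete.
    mask = F.interpolate(mask.float(), size=(size, size), mode="nearest")
    return image, mask
```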

21-10-4 commented 7 months ago

Thank you very much for your reply~