This repo is the official implementation for:

**SAMUS: Adapting Segment Anything Model for Clinically-Friendly and Generalizable Ultrasound Image Segmentation**

(The details of our SAMUS can be found in the models directory in this repo or in the paper.)
🏆 Low GPU requirements. (one 3090ti with 24G GPU memory is enough)

🏆 Large ultrasound dataset. (about 30K images and 69K masks covering 6 categories)

🏆 Excellent performance, especially in generalization ability.
Following Segment Anything, `python=3.8.16`, `pytorch=1.8.0`, and `torchvision=0.9.0` are used in SAMUS.
```
git clone https://github.com/xianlin7/SAMUS.git
cd SAMUS
conda create -n SAMUS python=3.8
conda activate SAMUS
pip install -r requirements.txt
```
We use the `vit_b` version of the SAM checkpoint.
```
<class ID>/<dataset file folder name>/<image file name>
```
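The layout above can be sanity-checked with a short script like the following (a minimal sketch; `list_dataset_images` is an illustrative helper, not part of the SAMUS codebase, and the directory names are placeholders):

```python
import os

def list_dataset_images(root):
    """Walk a data root laid out as
    <class ID>/<dataset file folder name>/<image file name>
    and return (class_id, dataset_name, image_name) tuples.
    Illustrative sketch only; not part of the SAMUS codebase."""
    samples = []
    for class_id in sorted(os.listdir(root)):
        class_dir = os.path.join(root, class_id)
        if not os.path.isdir(class_dir):
            continue
        for dataset_name in sorted(os.listdir(class_dir)):
            dataset_dir = os.path.join(class_dir, dataset_name)
            if not os.path.isdir(dataset_dir):
                continue
            for image_name in sorted(os.listdir(dataset_dir)):
                samples.append((class_id, dataset_name, image_name))
    return samples
```

Running it over your data root and inspecting the returned tuples is a quick way to confirm every image sits at the expected depth before training.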
Once you have the data ready, you can start training the model.
```
cd "/home/... .../SAMUS/"
python train.py --modelname SAMUS --task <your dataset config name>
```
Do not forget to set `load_path` in `./utils/config.py` before testing.
```
python test.py --modelname SAMUS --task <your dataset config name>
```
If SAMUS is helpful to your research, please consider citing:
```
@misc{lin2023samus,
      title={SAMUS: Adapting Segment Anything Model for Clinically-Friendly and Generalizable Ultrasound Image Segmentation},
      author={Xian Lin and Yangyang Xiang and Li Zhang and Xin Yang and Zengqiang Yan and Li Yu},
      year={2023},
      eprint={2309.06824},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```