This repo is the official implementation for: **SAMCT: Segment Any CT Allowing Labor-Free Task-Indicator Prompts**. (The details of our SAMCT can be found in the models directory in this repo or in the paper.)
- SAMCT supports two modes: interactive segmentation and automatic segmentation (allowing both manual prompts and labor-free task-indicator prompts).
- CT5M is a large CT dataset (about 1.1M images and 5M masks covering 118 categories).
- Excellent performance (superior to both foundation models and task-specific models).
Following Segment Anything and SAMUS, `python=3.8.16`, `pytorch=1.8.0`, and `torchvision=0.9.0` are used in SAMCT.
```shell
git clone https://github.com/xianlin7/SAMCT.git
cd SAMCT
conda create -n SAMCT python=3.8
conda activate SAMCT
```
Install PyTorch (`pytorch=1.8.0`) and TorchVision (`torchvision=0.9.0`); you can follow the official installation instructions. Then install the remaining dependencies:
```shell
pip install -r requirements.txt
```
(* If you have already installed our SAMUS, you can skip steps 2-4 above and activate the SAMUS environment instead: `conda activate SAMUS`.)
The `vit_b` version of the SAM checkpoint is used when training SAMCT. Each image in the dataset should be indexed as `<class ID>/<dataset file folder name>/<image file name>`.
(Here, "class ID" represents the label value of each category in the indicated dataset. For example, the "class ID" of the spleen, right kidney, left kidney, gallbladder, and esophagus on the BTCV dataset should be 1, 2, 3, 4, and 5, respectively.)
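To make the naming convention concrete, the small helper below splits such an index into its three parts. (It is written for illustration only and is not part of the SAMCT codebase; the example path is hypothetical.)

```python
from pathlib import PurePosixPath

def parse_sample(relpath):
    """Split '<class ID>/<dataset file folder name>/<image file name>'
    into (class_id, dataset, image). Illustration only, not SAMCT code."""
    class_id, dataset, image = PurePosixPath(relpath).parts
    return int(class_id), dataset, image

# e.g. a spleen mask (class ID 1 on BTCV); the file name is made up:
print(parse_sample("1/BTCV/case0001_slice045.png"))
# → (1, 'BTCV', 'case0001_slice045.png')
```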
Once you have the data ready, you can start training the model.
```shell
cd "/home/... .../SAMCT/"
# interactive mode (manual prompts)
python train.py --modelname SAMCT --task <your dataset config name>
# automatic mode (labor-free task-indicator prompts)
python train_auto_prompt.py --modelname AutoSAMCT --task <your dataset config name>
```
Do not forget to set the `load_path` in `./utils/config.py` before testing.
```shell
# interactive mode (manual prompts)
python testSAMCT.py --modelname SAMCT --task <your dataset config name>
# automatic mode (labor-free task-indicator prompts)
python test.py --modelname AutoSAMCT --task <your dataset config name>
```
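For illustration, the relevant lines in `./utils/config.py` might look like the sketch below. The class name, field names, and paths here are all assumptions; match them to the actual task config in the repo.

```python
# Hypothetical excerpt of ./utils/config.py -- class and field names are
# assumptions, not the actual SAMCT config; adapt to the real file.
class Config_BTCV:
    load_path = "./checkpoints/SAMCT_BTCV_best.pth"  # trained weights to evaluate (hypothetical path)
    result_path = "./result/BTCV/"                   # where predictions are saved (hypothetical path)
```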
If our SAMCT is helpful to you, please consider citing:
```
@misc{lin2024samct,
      title={SAMCT: Segment Any CT Allowing Labor-Free Task-Indicator Prompts},
      author={Xian Lin and Yangyang Xiang and Zhehao Wang and Kwang-Ting Cheng and Zengqiang Yan and Li Yu},
      year={2024},
      eprint={2403.13258},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```