
MIT License

SAMCT

This repo is the official implementation of SAMCT: Segment Any CT Allowing Labor-Free Task-Indicator Prompts. (The details of SAMCT can be found in the models directory of this repo or in the paper.)

Highlights

πŸ† SAMCT supports two modes: interactive segmentation and automatic segmentation. (allowing both manual prompts and labor-free task-indicator prompts)\ πŸ† CT5M is a large CT dataset. (about 1.1M images and 5M masks covering 118 categories)\ πŸ† Excellent performance. (superior to both foundation models and task-specific models)

Installation

Following Segment Anything and SAMUS, SAMCT uses python=3.8.16, pytorch=1.8.0, and torchvision=0.9.0.

  1. Clone the repository.
    git clone https://github.com/xianlin7/SAMCT.git
    cd SAMCT
  2. Create a virtual environment for SAMCT and activate the environment.
    conda create -n SAMCT python=3.8
    conda activate SAMCT
  4. Install PyTorch [pytorch=1.8.0] and TorchVision [torchvision=0.9.0]. (You can follow the official PyTorch installation instructions.)
  4. Install other dependencies.
    pip install -r requirements.txt

    (* If you have already installed our SAMUS, you can skip steps 2-4 above and simply activate the SAMUS environment: conda activate SAMUS)
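After installing, you can sanity-check that the pinned versions from the steps above are actually present. The following stdlib-only sketch is not part of the SAMCT repo; the package names and version pins are taken from this README, and you can adjust them for your environment:

```python
# Sketch: verify the dependency versions pinned by the SAMCT README.
# (Not part of the SAMCT codebase; pins taken from the installation steps.)
from importlib import metadata

REQUIRED = {"torch": "1.8.0", "torchvision": "0.9.0"}


def check_versions(required):
    """Return {package: (installed_version_or_None, matches_pin)}."""
    report = {}
    for pkg, want in required.items():
        try:
            have = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            have = None
        report[pkg] = (have, have is not None and have.startswith(want))
    return report


if __name__ == "__main__":
    for pkg, (have, ok) in check_versions(REQUIRED).items():
        print(f"{pkg}: {'OK' if ok else 'MISSING/MISMATCH'} (found {have})")
```

Running it prints one line per pinned package, so a mismatched CUDA/PyTorch build shows up before training starts.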

Checkpoints

• We use the vit_b checkpoint of SAM when training SAMCT.
• The checkpoint of SAMCT trained on CT5M will be released in the future 🌝.
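A small helper can confirm the SAM ViT-B checkpoint is in place before training. The filename `sam_vit_b_01ec64.pth` is the one published by the official Segment Anything release; the `checkpoints/` directory below is only an assumption for illustration, so point it at wherever your copy actually lives:

```python
# Sketch: locate the SAM ViT-B checkpoint before launching training.
# The filename comes from the official Segment Anything release; the
# "checkpoints/" root is a hypothetical layout, not mandated by SAMCT.
from pathlib import Path

SAM_VIT_B = "sam_vit_b_01ec64.pth"


def find_checkpoint(root, name=SAM_VIT_B):
    """Return the checkpoint path under `root`, or None if it is missing."""
    path = Path(root) / name
    return path if path.is_file() else None


if __name__ == "__main__":
    ckpt = find_checkpoint("checkpoints")
    print(ckpt if ckpt else f"Download {SAM_VIT_B} into checkpoints/ first.")
```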

Data

Citation

If our SAMCT is helpful to you, please consider citing:

@misc{lin2024samct,
      title={SAMCT: Segment Any CT Allowing Labor-Free Task-Indicator Prompts}, 
      author={Xian Lin and Yangyang Xiang and Zhehao Wang and Kwang-Ting Cheng and Zengqiang Yan and Li Yu},
      year={2024},
      eprint={2403.13258},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}