
MedSAM

This is the official repository for MedSAM: Segment Anything in Medical Images (Nature Communications, 2024; paper: https://www.nature.com/articles/s41467-024-44824-z). The code is released under the Apache License 2.0.

Join our mailing list to get updates.

News

Installation

  1. Create a virtual environment conda create -n medsam python=3.10 -y and activate it conda activate medsam
  2. Install PyTorch 2.0
  3. Clone the repository: git clone https://github.com/bowang-lab/MedSAM
  4. Enter the MedSAM folder cd MedSAM and run pip install -e .
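
To confirm the environment is set up, you can run a quick sanity check (a minimal sketch; it assumes the editable install exposes the bundled segment_anything package):

```python
# Optional sanity check after installation.
import torch
from segment_anything import sam_model_registry  # provided by `pip install -e .`

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("Available model architectures:", list(sam_model_registry.keys()))
```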

Get Started

Download the model checkpoint and place it at, e.g., work_dir/MedSAM/medsam_vit_b

We provide three ways to quickly test the model on your images:

  1. Command line
python MedSAM_Inference.py # segment the demo image

Segment other images with the following flags:

-i: path to the input image
-o: path for the output segmentation
--box: bounding box of the segmentation target

(A programmatic sketch of the same box-prompted inference is included at the end of this section.)
  2. Jupyter notebook

We provide a step-by-step tutorial on Google Colab.

You can also run it locally with tutorial_quickstart.ipynb.

  3. GUI

Install PyQt5 with pip: pip install PyQt5 or conda: conda install -c anaconda pyqt

python gui.py

Load the image to the GUI and specify segmentation targets by drawing bounding boxes.

Demo video: https://github.com/bowang-lab/MedSAM/assets/19947331/a8d94b4d-0221-4d09-a43a-1251842487ee
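
If you would rather call the model from Python than through MedSAM_Inference.py or the GUI, the sketch below outlines the box-prompted inference flow: load the checkpoint via sam_model_registry, resize the image to 1024x1024 and rescale intensities to [0, 1], embed it once, and decode a mask for a bounding box. The checkpoint path, image path, and box coordinates are placeholders, and the preprocessing is a simplified approximation; MedSAM_Inference.py remains the reference implementation.

```python
import numpy as np
import torch
import torch.nn.functional as F
from skimage import io, transform
from segment_anything import sam_model_registry

device = "cuda" if torch.cuda.is_available() else "cpu"
ckpt_path = "work_dir/MedSAM/medsam_vit_b.pth"  # placeholder checkpoint path
img_path = "your_image.png"                     # placeholder image path
box = np.array([95, 255, 190, 350])             # placeholder [x_min, y_min, x_max, y_max]

medsam = sam_model_registry["vit_b"](checkpoint=ckpt_path).to(device).eval()

# Load the image as HxWx3, resize to 1024x1024, and rescale intensities to [0, 1].
img = io.imread(img_path)
if img.ndim == 2:
    img = np.repeat(img[:, :, None], 3, axis=-1)
H, W = img.shape[:2]
img_1024 = transform.resize(img, (1024, 1024), order=3,
                            preserve_range=True, anti_aliasing=True)
img_1024 = (img_1024 - img_1024.min()) / np.clip(img_1024.max() - img_1024.min(), 1e-8, None)
img_tensor = torch.as_tensor(img_1024, dtype=torch.float32).permute(2, 0, 1)[None].to(device)

with torch.no_grad():
    image_embedding = medsam.image_encoder(img_tensor)  # (1, 256, 64, 64)

    # Scale the box from original pixel coordinates to the 1024x1024 input.
    box_1024 = box / np.array([W, H, W, H]) * 1024
    box_torch = torch.as_tensor(box_1024, dtype=torch.float32, device=device)[None, None, :]

    sparse_emb, dense_emb = medsam.prompt_encoder(points=None, boxes=box_torch, masks=None)
    low_res_logits, _ = medsam.mask_decoder(
        image_embeddings=image_embedding,
        image_pe=medsam.prompt_encoder.get_dense_pe(),
        sparse_prompt_embeddings=sparse_emb,
        dense_prompt_embeddings=dense_emb,
        multimask_output=False,
    )
    # Upsample the low-resolution logits to the original image size and threshold.
    prob = torch.sigmoid(F.interpolate(low_res_logits, size=(H, W),
                                       mode="bilinear", align_corners=False))
    mask = (prob.squeeze().cpu().numpy() > 0.5).astype(np.uint8)

io.imsave("segmentation.png", mask * 255)
```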

Model Training

Data preprocessing

Download the SAM checkpoint and place it at work_dir/SAM/sam_vit_b_01ec64.pth.

Download the demo dataset and unzip it to data/FLARE22Train/.

This dataset contains 50 abdominal CT scans, and each scan has an annotation mask with 13 organs. The organ label names are available at MICCAI FLARE2022.

Run pre-processing

Install cc3d: pip install connected-components-3d

python pre_CT_MR.py
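
The exact preprocessing is implemented in pre_CT_MR.py. As a rough illustration only, CT preprocessing of this kind typically clips intensities to a Hounsfield-unit window, rescales them to [0, 255], and resizes each slice; the window bounds and target size below are placeholder assumptions, not the script's actual settings.

```python
import numpy as np
from skimage import transform

def preprocess_ct_slice(slice_hu, window_min=-160.0, window_max=240.0, target_size=1024):
    """Illustrative CT slice preprocessing: clip to a HU window, rescale to [0, 255],
    resize, and stack to 3 channels. Placeholder values, not pre_CT_MR.py's settings."""
    clipped = np.clip(slice_hu, window_min, window_max)
    rescaled = (clipped - window_min) / (window_max - window_min) * 255.0
    resized = transform.resize(rescaled, (target_size, target_size),
                               order=3, preserve_range=True, anti_aliasing=True)
    return np.repeat(resized[:, :, None], 3, axis=-1).astype(np.uint8)
```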

Training on multiple GPUs (Recommended)

The model was trained on five A100 nodes, each with four 80 GB GPUs (20 A100 GPUs in total). Please use the Slurm script to start the training process.

sbatch train_multi_gpus.sh

When the training process is done, please convert the checkpoint to SAM's format for convenient inference.

python utils/ckpt_convert.py # Please set the corresponding checkpoint path first
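
For reference, the conversion done by utils/ckpt_convert.py boils down to unwrapping the training checkpoint into a plain state_dict that sam_model_registry can load directly. The sketch below makes two assumptions you should adjust to your own checkpoint: the weights sit under a "model" key, and (for multi-GPU training) the keys carry DistributedDataParallel's module. prefix; the paths are placeholders.

```python
import torch

train_ckpt_path = "work_dir/medsam_train/checkpoint_latest.pth"  # placeholder path
sam_format_path = "work_dir/MedSAM/medsam_vit_b_converted.pth"   # placeholder path

ckpt = torch.load(train_ckpt_path, map_location="cpu")

# Assumed layout: weights wrapped under a "model" key; DDP training adds a "module." prefix.
state_dict = ckpt["model"] if isinstance(ckpt, dict) and "model" in ckpt else ckpt
state_dict = {k.replace("module.", "", 1) if k.startswith("module.") else k: v
              for k, v in state_dict.items()}

# The result is a plain state_dict loadable via sam_model_registry["vit_b"](checkpoint=...).
torch.save(state_dict, sam_format_path)
```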

Training on one GPU

python train_one_gpu.py

If you only want to train the mask decoder, please check the tutorial on the 0.1 branch.
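
The 0.1-branch tutorial is the authoritative guide for decoder-only training; the core idea is simply to freeze the image encoder and prompt encoder and hand the optimizer only the mask decoder's parameters. A minimal sketch (the checkpoint path and optimizer settings are illustrative):

```python
import torch
from segment_anything import sam_model_registry

# Start from pretrained weights, then freeze everything except the mask decoder.
medsam = sam_model_registry["vit_b"](checkpoint="work_dir/SAM/sam_vit_b_01ec64.pth")
for param in medsam.image_encoder.parameters():
    param.requires_grad = False
for param in medsam.prompt_encoder.parameters():
    param.requires_grad = False

# Only the mask decoder is updated during training.
optimizer = torch.optim.AdamW(medsam.mask_decoder.parameters(), lr=1e-4, weight_decay=0.01)
print("Trainable parameters:", sum(p.numel() for p in medsam.parameters() if p.requires_grad))
```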

Acknowledgements

Reference

@article{MedSAM,
  title={Segment Anything in Medical Images},
  author={Ma, Jun and He, Yuting and Li, Feifei and Han, Lin and You, Chenyu and Wang, Bo},
  journal={Nature Communications},
  volume={15},
  pages={1--9},
  year={2024}
}