Medical SAM Adapter, or MSA for short, is a project to fine-tune SAM for medical imaging using adaptation. The method is elaborated in the paper Medical SAM Adapter: Adapting Segment Anything Model for Medical Image Segmentation.
Mobile SAM is now supported: use it by setting -net mobile_sam. You can also freely set -image_size. Credit: @shinning0821
Additionally, you now have the flexibility to use ViT, Tiny ViT, and Efficient ViT as encoders. Check the details here. Credit: @shinning0821
LoRA fine-tuning is now supported: set -mod as sam_lora. A guide can be found here. Credit: @shinning0821
Multi-class segmentation is supported: set -multimask_output to the number of classes favored. The REFUGE example has also been updated to two classes (optic disc & cup). Credit: @LJQCN101
A dataset guide is available in guidance/Dataset.md. Credit: @shinning0821
We've released a set of pre-trained Adapters for various organs/lesions in the Medical-Adapter-Zoo. Just pick the adapter that matches your disease and easily adjust SAM to suit your specific needs.
If you can't find what you're looking for, please suggest it through any contact method available to us (GitHub issue, HuggingFace community, or Discord). We'll do our very best to include it.
Install the environment:
conda env create -f environment.yml
conda activate sam_adapt
Then download the SAM checkpoint and put it at ./checkpoint/sam/
You can run:
wget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_b_01ec64.pth
mv sam_vit_b_01ec64.pth ./checkpoint/sam
(create the folder if it does not exist)
Download the ISIC dataset (Part 1) from https://challenge.isic-archive.com/data/. Then put the CSV files in "./data/isic" under your data path. Your dataset folder under "your_data_path" should look like:
ISIC/
    ISBI2016_ISIC_Part1_Test_Data/...
    ISBI2016_ISIC_Part1_Training_Data/...
    ISBI2016_ISIC_Part1_Test_GroundTruth.csv
    ISBI2016_ISIC_Part1_Training_GroundTruth.csv
You can find the CSV files here
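Before training, it can help to verify that every file the ground-truth CSV references actually exists on disk. A minimal sketch (the function name and the assumed CSV layout, rows of file paths relative to your data root, are hypothetical, not part of the codebase):

```python
import csv
import os

def check_isic_csv(csv_path, data_root):
    """Return paths referenced by a ground-truth CSV that are missing on disk.

    Assumption (hypothetical layout): each CSV row lists file paths
    relative to `data_root`, e.g. an image and its segmentation mask.
    """
    missing = []
    with open(csv_path, newline='') as f:
        for row in csv.reader(f):
            for rel_path in row:
                if not os.path.exists(os.path.join(data_root, rel_path)):
                    missing.append(rel_path)
    return missing
```

Running this before training surfaces path mistakes early, instead of mid-epoch in the data loader.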
Begin Adapting! run: python train.py -net sam -mod sam_adpt -exp_name *msa_test_isic* -sam_ckpt ./checkpoint/sam/sam_vit_b_01ec64.pth -image_size 1024 -b 32 -dataset isic -data_path *../data*
change "data_path" and "exp_name" for your own useage. you can change "exp_name" to anything you want.
You can descrease the image size
or batch size b
if out of memory.
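Why decreasing -image_size helps so much: ViT activation memory grows roughly with the number of patches, i.e. quadratically in the image side length. A back-of-envelope sketch (the function is illustrative, not part of the codebase):

```python
def relative_activation_memory(new_size, base_size=1024):
    """Rough relative activation-memory cost of training at `new_size`
    instead of `base_size`, assuming memory scales with patch count,
    i.e. quadratically in the image side length. Illustrative only.
    """
    return (new_size / base_size) ** 2
```

So halving the image size to 512 cuts activation memory to roughly a quarter, while halving the batch size "b" only scales memory roughly linearly.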
Evaluation: The code automatically evaluates the model on the test set during training; set "--val_freq" to control how often (in epochs) to evaluate. You can also run val.py for an independent evaluation.
Result Visualization: Set the "--vis" parameter to control how often (in epochs) to visualize the results during training or evaluation.
By default, everything is saved under ./logs/
REFUGE dataset contains 1200 fundus images with optic disc/cup segmentations and clinical glaucoma labels.
Download the dataset manually from here, or use the command line:
git lfs install
git clone git@hf.co:datasets/realslimman/REFUGE-MultiRater
Unzip the dataset and put it in the target folder:
unzip ./REFUGE-MultiRater.zip
mv REFUGE-MultiRater ./data
For training the adapter, run: python train.py -net sam -mod sam_adpt -exp_name REFUGE-MSAdapt -sam_ckpt ./checkpoint/sam/sam_vit_b_01ec64.pth -image_size 1024 -b 32 -dataset REFUGE -data_path ./data/REFUGE-MultiRater
You can change "exp_name" to anything you want. You can decrease the image size or batch size "b" if you run out of memory.
This tutorial demonstrates how MSA can adapt SAM to a 3D multi-organ segmentation task using the BTCV challenge dataset. For the BTCV dataset, under Institutional Review Board (IRB) supervision, 50 abdominal CT scans were randomly selected from a combination of an ongoing colorectal cancer chemotherapy trial and a retrospective ventral hernia study. The 50 scans were captured during the portal venous contrast phase with variable volume sizes (512 x 512 x 85 to 512 x 512 x 198) and fields of view (approx. 280 x 280 x 280 mm3 to 500 x 500 x 650 mm3). The in-plane resolution varies from 0.54 x 0.54 mm2 to 0.98 x 0.98 mm2, while the slice thickness ranges from 2.5 mm to 5.0 mm.
Target: 13 abdominal organs: spleen, right kidney, left kidney, gallbladder, esophagus, liver, stomach, aorta, IVC, portal and splenic veins, pancreas, right adrenal gland, left adrenal gland.
Modality: CT
Size: 30 3D volumes (24 training + 6 testing)
Challenge: BTCV MICCAI Challenge
The following figure shows image patches with the organ sub-regions that are annotated in the CT (top left) and the final labels for the whole dataset (right).
python train.py -net sam -mod sam_adpt -exp_name msa-3d-sam-btcv -sam_ckpt ./checkpoint/sam/sam_vit_b_01ec64.pth -image_size 1024 -b 8 -dataset decathlon -thd True -chunk 96 -data_path ../data -num_sample 4
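The -thd True and -chunk 96 flags handle the 3D volumes: conceptually, a deep scan is processed in consecutive sub-volumes along the depth axis. A sketch of that idea (the helper below is illustrative and assumes -chunk is the number of slices per sub-volume):

```python
def depth_chunks(depth, chunk):
    """Split a volume with `depth` slices into consecutive (start, end)
    slice ranges of at most `chunk` slices each."""
    return [(start, min(start + chunk, depth))
            for start in range(0, depth, chunk)]
```

For example, a 198-slice BTCV scan with a chunk size of 96 would be covered by the ranges (0, 96), (96, 192), and (192, 198).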
It is simple to run MSA on other datasets. Just write another dataset class following those in ./dataset.py. You only need to make sure it returns a dict with:
{
'image': A tensor of images with size [C, H, W] for 2D data or [C, H, W, D] for 3D data.
D is the depth of the 3D volume; C is the number of channels of a scan/frame, which is commonly 1 for CT, MRI, and US data.
If processing, say, a color surgical video, D would be the number of time frames and C would be 3 for an RGB frame.
'label': The target masks, the same size as the images except for the resolutions (H and W).
'p_label': The prompt label deciding between positive and negative prompts. To simplify, you can always set it to 1 if you don't need the negative-prompt function.
'pt': The prompt. It should be the same as in SAM, e.g., a click prompt should be [x of click, y of click], with one click per scan/frame when using 3D data.
'image_meta_dict': Optional. If you want to save/visualize the results, put the name of the image in it under the key ['filename_or_obj'].
...(others as you want)
}
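The contract above can be sketched as a minimal dataset class. Everything here is hypothetical scaffolding: the class name is made up, and the nested lists stand in for tensors; a real dataset class would return torch tensors loaded from disk.

```python
import random

class ToyDataset:
    """Minimal 2D dataset sketch returning the dict MSA expects.

    For clarity the 'tensors' here are nested lists with shape [C, H, W];
    a real dataset class would return torch tensors loaded from disk.
    """

    def __init__(self, names, size=32):
        self.names = names      # hypothetical list of image identifiers
        self.size = size

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        h = w = self.size
        image = [[[0.0] * w for _ in range(h)]]          # [C=1, H, W]
        label = [[[0] * w for _ in range(h)]]            # same layout as image
        # One click prompt somewhere inside the image, as [x, y].
        pt = [random.randrange(w), random.randrange(h)]
        return {
            'image': image,
            'label': label,
            'p_label': 1,  # always positive if negative prompts are unused
            'pt': pt,
            'image_meta_dict': {'filename_or_obj': self.names[idx]},
        }
```

A 3D variant would follow the same pattern with [C, H, W, D] arrays and one click per slice.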
You are welcome to open issues if you run into any problems. It would be appreciated if you could contribute your dataset extensions. Unlike natural images, medical images vary a lot depending on the task. Expanding the generalization of a method requires everyone's efforts.
[ ] Release Medical Adapter Zoo
@misc{wu2023medical,
title={Medical SAM Adapter: Adapting Segment Anything Model for Medical Image Segmentation},
author={Junde Wu and Wei Ji and Yuanpei Liu and Huazhu Fu and Min Xu and Yanwu Xu and Yueming Jin},
year={2023},
eprint={2304.12620},
archivePrefix={arXiv},
primaryClass={cs.CV}
}