facebookresearch / segment-anything-2

The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
Apache License 2.0

Medical-SAM2 is now Released 🥳! #153

jiayuanz3 opened this issue 1 month ago

jiayuanz3 commented 1 month ago

We've successfully applied the SAM 2 framework to both 2D and 3D medical image segmentation tasks. Feel free to check out the Medical SAM 2 paper here 🤩 The GitHub code is also available here 😍

DM me if you have any improvement suggestions, and we can collaborate in the future 🫶

A big shout-out to my co-authors 🙌 @YunliQi @WuJunde

kassimi98 commented 1 month ago

Does Med-SAM 2 follow the same training strategy (pre-training and full training) as SAM 2 for video segmentation?

jiayuanz3 commented 1 month ago

That's a really good question! Our MedSAM-2 handles 2D and 3D medical image segmentation by treating the inputs as videos. train_3d.py follows the SAM 2 training procedure, processing 3D medical images as sequences of 2D slices, because contextual information from adjacent slices enables more accurate segmentation. train_2d.py diverges from the original SAM 2 (both image and video modes) by introducing a confidence memory bank.
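The "3D volume as a video" idea above can be sketched in a few lines. This is not the Medical-SAM-2 code itself, just a minimal illustration (the function name and shapes are assumptions): a depth-first volume is split into 2D slices so each slice can be fed to a SAM2-style video pipeline as one frame.

```python
import numpy as np

def volume_to_slices(volume: np.ndarray, axis: int = 0) -> list:
    """Split a 3D medical volume (e.g. depth x H x W) into a list of 2D
    slices, so the sequence can be consumed frame by frame like a video."""
    return [np.take(volume, i, axis=axis) for i in range(volume.shape[axis])]

# Hypothetical example: a 4-slice volume of 8x8 images
vol = np.zeros((4, 8, 8), dtype=np.float32)
frames = volume_to_slices(vol)  # 4 frames, each 8x8
```

Propagating masks across these frames is then what lets the model exploit context from adjacent slices.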

kassimi98 commented 1 month ago

So in train_3d.py, can we find the pre-training for static images and the full training for video data?

jiayuanz3 commented 1 month ago

Sorry, I may have misunderstood your point before. The 3D code we provide fine-tunes the SAM 2 model in the medical setting by loading the SAM 2 pre-trained weights. So it does not include the two steps you mentioned, but it does include all the memory-related components described in the SAM 2 paper. Hope that helps!
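The fine-tuning setup described above boils down to loading pre-trained weights into a model whose architecture may differ slightly downstream. A generic, framework-agnostic sketch of that pattern (the function name and the shape-matching rule are assumptions, not the repo's actual loading code):

```python
import numpy as np

def load_matching_weights(model_state: dict, checkpoint: dict) -> dict:
    """Merge pre-trained checkpoint tensors into a model state dict,
    keeping only keys whose shapes match -- a common fine-tuning pattern
    when the downstream model adds or resizes a few components."""
    merged = dict(model_state)
    for key, value in checkpoint.items():
        if key in merged and merged[key].shape == value.shape:
            merged[key] = value  # adopt the pre-trained tensor
        # otherwise keep the model's own (freshly initialized) tensor
    return merged
```

In PyTorch the same effect is usually achieved with `model.load_state_dict(checkpoint, strict=False)`, which tolerates missing or mismatched keys.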

kassimi98 commented 1 month ago

thank you