xmed-lab / TP-Mamba


Efficiently Adapting Vision Foundation Models to 3D Medical Image Segmentation 🚀

Official PyTorch implementation of our works on efficiently adapting pre-trained Vision Foundation Models (VFMs) to the 3D medical image segmentation task.

[1] "Tri-Plane Mamba: Efficiently Adapting Segment Anything Model for 3D Medical Images" (MICCAI 2024)

🌊🌊🌊 News

💧 [2024-10-22] Reorganized and uploaded part of the core code.

🔥🔥🔥 Contributions

We focus on proposing more advanced adapters and training algorithms to adapt pre-trained VFMs (both natural-image and medical-specific models) to 3D medical image segmentation.

🔥 Data-Efficient: Achieve competitive performance with less labeled data, e.g., via semi-supervised, few-shot, or zero-shot learning.

🔥 Parameter-Efficient: Enhance representations with lightweight adapters, such as local-feature, global-feature, or other existing adapters; a minimal sketch follows this list.
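
As a rough illustration of the parameter-efficient direction, a pixel-independent adapter such as LoRA wraps a frozen linear layer of the backbone with a trainable low-rank update. The sketch below is a minimal PyTorch example with illustrative names and hyper-parameters; it is not this repository's API.

```python
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen linear layer with a trainable low-rank update:
    y = W x + (alpha / r) * B(A(x)); only A and B are trained."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # keep the pre-trained weights frozen
        self.lora_a = nn.Linear(base.in_features, r, bias=False)
        self.lora_b = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)  # adapter starts as a no-op
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))
```

In practice such wrappers are typically applied to the query/key/value projections of the frozen SAM image encoder, so only a small fraction of the parameters is updated during fine-tuning.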

🧰 Installation

🔨 TODO

⭐⭐⭐ Usage

💡 Supported Adapters

| Name                  | Type              | Supported |
| --------------------- | ----------------- | --------- |
| Baseline (Frozen SAM) | None              | ✔️        |
| LoRA                  | pixel-independent | ✔️        |
| SSF                   | pixel-independent | TODO      |
| Multi-scale conv      | local             | ✔️        |
| PPM                   | local             | TODO      |
| Mamba                 | global            | TODO      |
| Linear attention      | global            | TODO      |
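
For the "multi-scale conv" entry above, a local adapter can be sketched as parallel depth-wise 3D convolutions with different kernel sizes over the token feature volume. This is a hypothetical minimal example (the class name, kernel sizes, and residual design are assumptions, not the repository's implementation):

```python
import torch.nn as nn

class MultiScaleConvAdapter(nn.Module):
    """Toy local adapter: parallel depth-wise 3D convolutions at several
    kernel sizes, summed, fused point-wise, and added back residually."""

    def __init__(self, dim: int, kernel_sizes=(1, 3, 5)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv3d(dim, dim, k, padding=k // 2, groups=dim)  # depth-wise
            for k in kernel_sizes
        )
        self.proj = nn.Conv3d(dim, dim, 1)  # point-wise fusion

    def forward(self, x):  # x: (B, C, D, H, W) feature volume
        return x + self.proj(sum(branch(x) for branch in self.branches))
```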

📋 Results and Models

📌 TODO

📚 Citation

If you find our work helpful, please feel free to cite it in your publications.

📗 TP-Mamba

@InProceedings{Wan_TriPlane_MICCAI2024,
        author = {Wang, Hualiang and Lin, Yiqun and Ding, Xinpeng and Li, Xiaomeng},
        title = {{Tri-Plane Mamba: Efficiently Adapting Segment Anything Model for 3D Medical Images}},
        booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
        year = {2024},
        publisher = {Springer Nature Switzerland},
        volume = {LNCS 15009},
        month = {October},
        pages = {pending}
}

🍻 Acknowledgements

We sincerely appreciate these excellent repositories: 🍺MONAI and 🍺SAM.