openmedlab / MedLSAM

MedLSAM: Localize and Segment Anything Model for 3D Medical Images

Suggestion - Integrate MobileSAM into the pipeline for lightweight and faster inference #1

Open mdimtiazh opened 1 year ago

mdimtiazh commented 1 year ago

Reference: https://github.com/ChaoningZhang/MobileSAM

Our project performs on par with the original SAM and keeps exactly the same pipeline, with the only change being the image encoder; therefore, it is easy to integrate into any project.
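For reference, a minimal sketch of how MobileSAM could be swapped in as a drop-in replacement for the original SAM predictor, based on the usage shown in the MobileSAM README. The checkpoint path, the dummy image, and the box prompt are placeholders, and the exact integration point inside MedLSAM is an assumption:

```python
import numpy as np
import torch
from mobile_sam import sam_model_registry, SamPredictor

# Placeholder paths/inputs; adapt to wherever the project currently loads SAM.
sam_checkpoint = "./weights/mobile_sam.pt"  # MobileSAM checkpoint
model_type = "vit_t"                        # TinyViT image encoder used by MobileSAM

device = "cuda" if torch.cuda.is_available() else "cpu"

# Build and load MobileSAM the same way as the original SAM; only the encoder differs.
mobile_sam = sam_model_registry[model_type](checkpoint=sam_checkpoint)
mobile_sam.to(device=device)
mobile_sam.eval()

# The predictor exposes the same interface as segment-anything's SamPredictor,
# so existing point/box prompting code should carry over unchanged.
predictor = SamPredictor(mobile_sam)

image = np.zeros((512, 512, 3), dtype=np.uint8)  # stand-in for an RGB slice (H, W, 3)
predictor.set_image(image)

box_prompt = np.array([100, 100, 400, 400])      # example xyxy box prompt
masks, scores, _ = predictor.predict(box=box_prompt, multimask_output=False)
```

Because only the checkpoint and `model_type` change, the rest of the prompting and post-processing pipeline would not need to be touched.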

MobileSAM is around 60 times smaller and around 50 times faster than the original SAM, and it is around 7 times smaller and around 5 times faster than the concurrent FastSAM. The comparison of the whole pipeline is summarized as follows:

(comparison images not shown)

Best Wishes,

Qiao

LWHYC commented 1 year ago

Hi Qiao,

Thank you for your interest in our project and the detailed comparison with MobileSAM. We appreciate your suggestion and acknowledge the impressive performance of MobileSAM.

We are planning to conduct more comprehensive comparisons, including MobileSAM, in our future work. This will give us a better understanding of how our project stacks up against other state-of-the-art solutions.

Thanks again for bringing this to our attention. Please stay tuned for our future updates.

Best regards, Wenhui