CASIA-IVA-Lab / FastSAM

Fast Segment Anything

Splitting the model into an Encoder and a Decoder #5

vietanhdev commented 1 year ago

Hello! I really like this project. Do you plan to support splitting this model into an Encoder and a Decoder like the original SAM? That way, the Decoder can run very fast, and we could apply it in applications like AnyLabeling. I'd love to help integrate it into AnyLabeling if we can find a way to split the model. Thank you very much!

vietanhdev commented 1 year ago

We could also convert the model to ONNX to remove the dependency on PyTorch, which is much larger than ONNX Runtime. My code for the original SAM is here.
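
For illustration, once a model is in ONNX form it can be driven by ONNX Runtime alone (a minimal sketch; the file name is a placeholder, and the dummy feed is built from the model's own metadata rather than assuming SAM's real input names):

```python
import numpy as np
import onnxruntime as ort

# 'decoder.onnx' is a placeholder for whatever gets exported.
sess = ort.InferenceSession('decoder.onnx')

# Build a dummy feed from the model's own metadata, so nothing
# SAM-specific is assumed about input names or shapes.
feed = {
    i.name: np.zeros([d if isinstance(d, int) else 1 for d in i.shape],
                     dtype=np.float32)
    for i in sess.get_inputs()
}
outputs = sess.run(None, feed)  # no PyTorch anywhere in this process
```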

an-yongqi commented 1 year ago

> Hello! I really like this project. Do you plan to support splitting this model into an Encoder and a Decoder like the original SAM? That way, the Decoder can run very fast, and we could apply it in applications like AnyLabeling. I'd love to help integrate it into AnyLabeling if we can find a way to split the model. Thank you very much!

Our approach has two phases: All-instance Segmentation and Prompt-guided Selection. The first stage can be seen as the encoder and the second as the decoder. Integration into AnyLabeling is also feasible. Thank you for the suggestion; we will package it as a SAM-like Encoder and Decoder in the near future.
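
As a rough sketch of that mapping (illustrative only; none of these names come from our actual code):

```python
import numpy as np

def run_encoder(image):
    # Phase 1: All-instance Segmentation. Heavy, run once per image.
    # Returns every candidate mask; this plays the role of SAM's image
    # embedding. Stand-in body: pretend we found three masks.
    return np.zeros((3, *image.shape[:2]), dtype=bool)

def run_decoder(all_masks, point):
    # Phase 2: Prompt-guided Selection. Cheap, run once per prompt.
    # Here: keep only the masks that cover the clicked (x, y) point.
    x, y = point
    return all_masks[all_masks[:, y, x]]

image = np.zeros((1024, 1024, 3), dtype=np.uint8)
all_masks = run_encoder(image)                 # slow step, once
selected = run_decoder(all_masks, (506, 340))  # fast step, per prompt
```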

an-yongqi commented 1 year ago

> We could also convert the model to ONNX to remove the dependency on PyTorch, which is much larger than ONNX Runtime. My code for the original SAM is here.

We followed the YOLOv8-to-ONNX tutorial for the conversion, and we plan to release code for running inference directly with ONNX in the near future.
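
Roughly, the YOLOv8 export path looks like this (a sketch under the assumption that the FastSAM checkpoint loads through the Ultralytics YOLOv8 wrapper, since FastSAM is built on YOLOv8-seg; the released code may differ):

```python
from ultralytics import YOLO

# Assumption: the FastSAM weights load via the YOLOv8 wrapper.
model = YOLO('./weights/FastSAM.pt')
model.export(format='onnx', imgsz=1024, opset=12)
# Writes ./weights/FastSAM.onnx, which ONNX Runtime can run without PyTorch.
```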

asizdzbest commented 1 year ago

> We will package it as a SAM-like Encoder and Decoder in the near future.

Thanks. I'm waiting for it too.

YinglongDu commented 1 year ago

We have created a new branch to integrate with AnyLabeling. We have split the functionality into three functions: point_prompt, box_prompt, and text_prompt, which can be seen as the model's decoders. Could you give a more detailed description of the specific functionality we need to encapsulate? 😊

vietanhdev commented 1 year ago

@YinglongDu

We can run the encoder as follows:

```python
image_embedding = run_encoder(image)
```

And then run the decoder on the cached embedding:

```python
mask = run_decoder(
    image_embedding,
    prompt,
)
```

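This mirrors the original SAM design: the heavy image encoder runs once per image, while the lightweight prompt decoder runs once per user click, which is what makes interactive annotation in tools like AnyLabeling practical.
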
YinglongDu commented 1 year ago

We have released the API for FastSAM. Please see fastsam/decoder.py for details.

```python
import cv2

from fastsam import FastSAM, FastSAMDecoder

DEVICE = 'cuda'  # or 'cpu'

# Import the model and wrap it in the SAM-style decoder interface.
model = FastSAM('./weights/FastSAM.pt')
fastsam = FastSAMDecoder(model, device=DEVICE, retina_masks=True,
                         imgsz=1024, conf=0.4, iou=0.9)

# Image to segment (example path).
image = cv2.imread('./images/dogs.jpg')

# Encoder: the heavy step, run once per image.
image_embedding = fastsam.run_encoder(image)

# Decoder: the light step, run once per prompt.
ann = fastsam.run_decoder(image_embedding,
                          point_prompt=[[506, 340]],
                          point_label=[1])
```
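
The value of the split is that one embedding serves many prompts. For example (an illustrative loop over the same API; the second point is made up):

```python
# One heavy encoder pass, then a cheap decoder call per prompt.
image_embedding = fastsam.run_encoder(image)
for point in ([[506, 340]], [[120, 80]]):
    ann = fastsam.run_decoder(image_embedding,
                              point_prompt=point,
                              point_label=[1])
```
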
mario-dg commented 1 year ago

Is there any update on the ONNX export for the encoder and decoder parts?

morestart commented 1 year ago

Any update?

ggsDing commented 1 year ago

> We have released the API for FastSAM. Please see fastsam/decoder.py for details. […]

Thanks for the updates. However, it seems that the 'image_embeddings' are actually wrapped results that include masks and boxes. Is it possible to get intermediate results, such as the encoded feature maps in the original SAM repository? The encoded features would be more valuable for adapting to downstream tasks.
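
For illustration, something like this generic PyTorch forward-hook pattern is what I mean; this is not FastSAM API, and the hooked layer below is a hypothetical pick:

```python
from fastsam import FastSAM

model = FastSAM('./weights/FastSAM.pt')
features = {}

def save_features(module, inputs, output):
    # Stash the raw output of whichever layer we hook.
    features['backbone'] = output

# Assumption: FastSAM wraps a YOLOv8 nn.Module reachable via
# model.model.model, as in Ultralytics-style wrappers.
target_layer = model.model.model[-2]
handle = target_layer.register_forward_hook(save_features)
_ = model('./images/dogs.jpg')  # a normal forward pass fills `features`
handle.remove()
```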

Looking forward to your reply.