Qsingle / LearnablePromptSAM

Uses the SAM ViT as the backbone to create a learnable prompt for semantic segmentation
Apache License 2.0

About one-shot and zero shot learning #15

Open YuxuanWen-Code opened 3 weeks ago

YuxuanWen-Code commented 3 weeks ago

Hi!

As reported in the technical report, PromptSAM used one-shot and zero-shot learning.

What are the implementation details? To be more specific, which data were used for training? And for one-shot learning, how did you use the single support image?

Many thanks

Qsingle commented 2 weeks ago

Thank you for your question. For one-shot learning, we choose a single sample from the training set of the dataset to train the model. For example, we randomly choose one image and its mask from the IDRiD segmentation dataset, train on that pair, and then evaluate the model on the IDRiD test set. Zero-shot means evaluating the model learned by one-shot training on other datasets (for example, trained on IDRiD, evaluated on DDR).
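The protocol described above can be sketched as follows. This is a minimal illustration of the sampling and evaluation split, not the repository's actual data-loading code: the file names, list contents, and the `pick_one_shot_sample` helper are all hypothetical stand-ins for the real IDRiD/DDR loaders.

```python
import random

# Hypothetical file lists standing in for the real datasets; the actual
# repository loads IDRiD/DDR images and masks from disk.
idrid_train = [(f"idrid_train_{i:02d}.png", f"idrid_train_{i:02d}_mask.png")
               for i in range(10)]
idrid_test = [f"idrid_test_{i:02d}.png" for i in range(5)]
ddr_test = [f"ddr_test_{i:02d}.png" for i in range(5)]


def pick_one_shot_sample(train_pairs, seed=0):
    """Randomly choose the single (image, mask) pair used for one-shot training."""
    rng = random.Random(seed)
    return rng.choice(train_pairs)


# One-shot: train the model on this single pair only,
# then evaluate on the test split of the same dataset (idrid_test).
image, mask = pick_one_shot_sample(idrid_train)

# Zero-shot: take the same one-shot-trained model and evaluate it on a
# dataset never seen during training (ddr_test), with no further updates.
```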