facebookresearch / segment-anything

The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
Apache License 2.0

Get Image Embedding Vector and Store it in DATABASE #740

Open cepa995 opened 4 months ago

cepa995 commented 4 months ago

Hi,

I am trying to get image embeddings from SAM and save them in a Database, but I am a bit confused with the embedding size.

For example: [screenshot of the printed embedding tensor]

I see that the size is [1, 256, 64, 64]. Is that because of the batch size? Shouldn't the embedding be (1, 256)?

How can I safely retrieve the correct embedding vector?

raoxinyu4977 commented 4 months ago

The input image has shape (1, 3, 1024, 1024). After passing through the ViT encoder (patch_size=16, out_channels=256), the output shape is (1, 256, 1024/16, 1024/16) = (1, 256, 64, 64), which is then reshaped to (1, 64*64, 256) for the subsequent self-attention processing.
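The shape bookkeeping can be sketched with a dummy NumPy array standing in for the real encoder output (in the repo, `SamPredictor.get_image_embedding()` returns this tensor after `set_image` has been called):

```python
import numpy as np

# Dummy stand-in for the image encoder output, (batch, channels, H/16, W/16)
embedding = np.zeros((1, 256, 64, 64))

# 1024-pixel input side / patch_size 16 = 64 tokens per side
assert embedding.shape == (1, 256, 1024 // 16, 1024 // 16)

# The rearrangement used for attention: (1, 256, 64, 64) -> (1, 64*64, 256)
tokens = embedding.reshape(1, 256, -1).transpose(0, 2, 1)
print(tokens.shape)  # (1, 4096, 256)
```

So the "embedding" is a 64x64 grid of 256-dimensional feature vectors, not a single (1, 256) vector.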

cepa995 commented 4 months ago

Aha, I see. So it is not a typical single-vector image encoding, as CLIP produces, but a matrix instead.

Does that mean that there is no point in using these image encodings to compute similarity between two images?

heyoeyo commented 4 months ago

> Does that mean that there is no point in using these image encodings to compute similarity between two images?

The embeddings are always the same shape, so if you want to treat them as a single vector to make comparison easier, they can always be combined: for example by stacking (flattening) them, averaging, taking the max of each feature, etc., or even by training a small model that merges them into a single vector for a specific use case.
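A minimal sketch of those pooling options with random arrays in place of real SAM embeddings (the helper names `to_vector` and `cosine` are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Two hypothetical SAM embeddings with the batch dim stripped: (256, 64, 64)
emb_a = rng.normal(size=(256, 64, 64))
emb_b = rng.normal(size=(256, 64, 64))

def to_vector(emb, how="mean"):
    """Collapse a (C, H, W) embedding grid into a single vector."""
    if how == "mean":     # average over all 64*64 spatial positions -> (C,)
        return emb.mean(axis=(1, 2))
    if how == "max":      # max of each feature channel -> (C,)
        return emb.max(axis=(1, 2))
    if how == "flatten":  # stack everything -> (C*H*W,), keeps spatial info
        return emb.reshape(-1)
    raise ValueError(how)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

for how in ("mean", "max", "flatten"):
    print(how, cosine(to_vector(emb_a, how), to_vector(emb_b, how)))
```

Which pooling works best depends on the use case: flattening keeps spatial layout (so shifted content scores lower), while mean/max pooling are more translation-tolerant but discard location.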

Alternatively, the comparison could be made in 2D, to get a measure of spatial similarity between two images.
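For the 2D version, one option is a per-location cosine similarity over the channel axis, giving a (64, 64) map of where the two images embed similarly (again with random stand-ins for real embeddings):

```python
import numpy as np

rng = np.random.default_rng(1)
emb_a = rng.normal(size=(256, 64, 64))  # hypothetical SAM embedding
emb_b = rng.normal(size=(256, 64, 64))

# Cosine similarity at each spatial position, reducing over the 256 channels
num = (emb_a * emb_b).sum(axis=0)
den = np.linalg.norm(emb_a, axis=0) * np.linalg.norm(emb_b, axis=0)
sim_map = num / den

print(sim_map.shape)  # (64, 64)
```

Each cell of `sim_map` tells you how similar the two images are at the corresponding patch of the (resized) 1024x1024 input.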