rom1504 / clip-retrieval

Easily compute clip embeddings and build a clip retrieval system with them
https://rom1504.github.io/clip-retrieval/
MIT License

Add support for the full open clip model name format: ViT-B-32/laion2b_s34b_b79k #314

Closed: mehdidc closed this 10 months ago

mehdidc commented 1 year ago

This leaves the defaults the same, but allows specifying a checkpoint if needed (i.e., it corresponds to the `pretrained` parameter of open_clip).

mehdidc commented 1 year ago

I think, ideally, there should be a dict mapping each model to the best available checkpoint (e.g., according to ImageNet zero-shot classification results), used when no checkpoint is provided. If a model has no entry in the dict, the CLI should throw an error. I'm not sure which checkpoints were selected before, though, so this change might break runs for people who update the repo. Maybe it's better to force the user to provide the checkpoint for an open_clip: model, to avoid any ambiguity. The other thing is that OpenAI models are also loadable with open_clip, so we might not actually need to separate the open_clip and non-open_clip cases.
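A minimal sketch of the mapping idea described above. The dict entries and the function name are illustrative assumptions, not the actual defaults or code in clip-retrieval:

```python
# Hypothetical mapping from open_clip model name to a default "pretrained"
# checkpoint tag. The entries here are examples for illustration only,
# not the defaults clip-retrieval actually uses.
BEST_CHECKPOINT = {
    "ViT-B-32": "laion2b_s34b_b79k",
    "ViT-L-14": "laion2b_s32b_b82k",
}


def resolve_checkpoint(model_name, checkpoint=None):
    """Return the checkpoint to use: the explicit one if given,
    otherwise the mapped default, otherwise raise an error."""
    if checkpoint is not None:
        return checkpoint
    if model_name not in BEST_CHECKPOINT:
        raise ValueError(
            f"No default checkpoint known for {model_name}; "
            "please pass one explicitly."
        )
    return BEST_CHECKPOINT[model_name]
```

With this scheme an explicit checkpoint always wins, and unknown models fail loudly instead of silently picking an arbitrary checkpoint.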

heyalexchoi commented 11 months ago

This would be nice. It took me a while to figure out which open_clip checkpoint actually produced my embeddings, and the default for 'ViT-L-14' seems to score considerably lower than some of the other options.

rom1504 commented 10 months ago

This looks similar in intent to #284, but there are no docs or tests.

rom1504 commented 10 months ago

I changed this to use the form ViT-B-32/laion2b_s34b_b79k, with no new clip checkpoint arg.

That makes it cleaner imo and does not leak the open_clip interface into the top-level interface of clip-retrieval.

It also allows other clip model interfaces to have different model formats.
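A sketch of how the combined name format could be parsed into open_clip's model and `pretrained` arguments. The function name is an assumption for illustration, not the actual clip-retrieval implementation:

```python
def parse_clip_model_name(name):
    """Split a combined name like "ViT-B-32/laion2b_s34b_b79k" into
    (model, pretrained). A name without "/" carries no explicit
    checkpoint, so pretrained is returned as None."""
    model, sep, pretrained = name.partition("/")
    return model, (pretrained if sep else None)
```

The two parts would then map directly onto open_clip's `model_name` and `pretrained` parameters, without the checkpoint notion leaking into clip-retrieval's own top-level interface.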