I think ideally there should be a dict mapping each model to the best available checkpoint (e.g., according to ImageNet zero-shot classification results), to be used when no checkpoint is provided. If the model has no entry in the dict, the cli should throw an error. I'm not sure which checkpoints were selected before, though, so this change might break existing runs for people who update the repo. Maybe it is better to force the user to provide the checkpoint in the case of an open_clip: model, to avoid any ambiguity. The other thing is that the OpenAI models are also loadable with open_clip, so we might not actually need to separate the open_clip and non-open_clip cases?
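For illustration, a minimal sketch of what that fallback dict could look like; the names DEFAULT_OPEN_CLIP_CHECKPOINTS and resolve_checkpoint and the listed checkpoint tags are hypothetical, not part of clip-retrieval:

```python
from typing import Optional

# Hypothetical table: model name -> pretrained tag with the best known
# ImageNet zero-shot score (values here are only examples).
DEFAULT_OPEN_CLIP_CHECKPOINTS = {
    "ViT-B-32": "laion2b_s34b_b79k",
    "ViT-L-14": "laion2b_s32b_b82k",
}

def resolve_checkpoint(model_name: str, checkpoint: Optional[str] = None) -> str:
    """Return the checkpoint to load, falling back to the defaults table."""
    if checkpoint is not None:
        return checkpoint
    if model_name not in DEFAULT_OPEN_CLIP_CHECKPOINTS:
        raise ValueError(
            f"No default checkpoint known for open_clip model {model_name!r}, "
            "please pass one explicitly."
        )
    return DEFAULT_OPEN_CLIP_CHECKPOINTS[model_name]
```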
This would be nice. It took a while for me to figure out which open_clip checkpoint actually made my embeddings, and it seems the default for 'ViT-L-14' scored considerably lower than some of the other options.
This looks similar in intent to #284, but that one has no docs or tests.
I changed this to use the form ViT-B-32/laion2b_s34b_b79k, with no new clip checkpoint arg.
That makes it cleaner imo and does not leak the open_clip interface into the top-level interface of clip-retrieval.
It also allows other clip model interfaces to have different model formats.
This leaves the defaults the same, but allows specifying a checkpoint if needed (i.e., it corresponds to the pretrained parameter of open_clip).
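For reference, a minimal sketch of how such a spec could be parsed and loaded, assuming a split on the first '/'; load_open_clip and the "openai" fallback are illustrative, not the actual clip-retrieval implementation:

```python
import open_clip

def load_open_clip(model_spec: str, device: str = "cpu"):
    """Load an open_clip model from 'model-name' or 'model-name/pretrained-tag'."""
    if "/" in model_spec:
        model_name, pretrained = model_spec.split("/", 1)
    else:
        # Illustrative fallback when no checkpoint is given; the "openai" tag
        # exists in open_clip for the common ViT models.
        model_name, pretrained = model_spec, "openai"
    # The tag maps directly to the `pretrained` parameter of open_clip.
    model, _, preprocess = open_clip.create_model_and_transforms(
        model_name, pretrained=pretrained, device=device
    )
    tokenizer = open_clip.get_tokenizer(model_name)
    return model, preprocess, tokenizer
```

With this, load_open_clip("ViT-B-32/laion2b_s34b_b79k") would load that specific checkpoint, while load_open_clip("ViT-B-32") would keep a default.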