rom1504 opened 2 years ago
Just created a starting point for this feature: https://github.com/LAION-AI/CLIP_benchmark. So far it supports zero-shot classification and retrieval metrics (recall@K for text retrieval and for image retrieval) on a few datasets from the CLIP paper (will keep adding more). It can be run on the pre-trained models supported by OpenCLIP. From here it should be pretty easy to go through all pre-trained models / datasets and build a csv with all the results; once I add all the datasets I will do that.
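For reference, here is a minimal sketch of what the retrieval metric (recall@K) looks like when computed from OpenCLIP embeddings. This is not the CLIP_benchmark implementation, just an illustration: it assumes a toy paired dataset where image `i` matches caption `i` (real benchmarks like MSCOCO have several captions per image), and the file names and captions are placeholders.

```python
# Sketch: recall@K for text->image and image->text retrieval with OpenCLIP.
# Assumes a one-to-one image/caption pairing; not the CLIP_benchmark code.
import torch
import open_clip
from PIL import Image

def recall_at_k(query_emb, gallery_emb, k):
    # query_emb: (N, D), gallery_emb: (N, D); the ground-truth match for
    # query i is gallery item i (diagonal pairing).
    query_emb = torch.nn.functional.normalize(query_emb, dim=-1)
    gallery_emb = torch.nn.functional.normalize(gallery_emb, dim=-1)
    sims = query_emb @ gallery_emb.t()                    # (N, N) cosine similarities
    topk = sims.topk(k, dim=-1).indices                   # k nearest gallery items per query
    targets = torch.arange(len(query_emb)).unsqueeze(-1)  # correct index for each query
    return (topk == targets).any(dim=-1).float().mean().item()

# Load any pre-trained model supported by OpenCLIP.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion400m_e32")
model.eval()

# Placeholder paired data; in the benchmark this comes from a retrieval dataset.
images = torch.stack([preprocess(Image.open(p)) for p in ["a.jpg", "b.jpg"]])
texts = open_clip.tokenize(["a photo of a cat", "a photo of a dog"])

with torch.no_grad():
    image_emb = model.encode_image(images)
    text_emb = model.encode_text(texts)

print("text->image R@1:", recall_at_k(text_emb, image_emb, k=1))
print("image->text R@1:", recall_at_k(image_emb, text_emb, k=1))
```

Sweeping this over every (model, dataset) pair and writing one row per run is basically all that is needed to produce the csv of results mentioned above.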
See also https://github.com/cat-state/clip_benchmark for evaluating retrieval metrics.
This will help to fairly compare the various CLIP models that get trained, and help researchers understand what is valuable in their models and techniques.