LAION-AI / CLIP_benchmark

CLIP-like model evaluation
MIT License

document how to run retrieval and generative eval using wds #91

Closed rom1504 closed 1 year ago

rom1504 commented 1 year ago

eg https://github.com/LAION-AI/CLIP_benchmark#coco-captions-example + https://huggingface.co/datasets/clip-benchmark/wds_mscoco_captions2017/tree/main

rom1504 commented 1 year ago

maybe something like clip_benchmark eval --dataset=mscoco_captions --dataset_root="https://huggingface.co/datasets/clip-benchmark/wds_{dataset_cleaned}/tree/main" --task=mscoco_generative --model=coca_ViT-L-14 --output=result.json --batch_size=256 --pretrained=model.pt

mehdidc commented 1 year ago

Yes, the following works fine for retrieval:

clip_benchmark eval --dataset=wds/mscoco_captions --dataset_root="https://huggingface.co/datasets/clip-benchmark/wds_{dataset_cleaned}/tree/main" --task=zeroshot_retrieval --model=ViT-B-32 --output=result.json
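As a side note, the `{dataset_cleaned}` placeholder in `--dataset_root` appears to be filled in from the dataset name with the `wds/` prefix stripped. A minimal sketch of that substitution, assuming that behavior (the helper name `expand_dataset_root` is hypothetical, not part of the CLI):

```python
# Sketch of how the --dataset_root template is presumably expanded.
# Assumption: {dataset_cleaned} is the --dataset value without its "wds/" prefix.
def expand_dataset_root(template: str, dataset: str) -> str:
    dataset_cleaned = dataset.replace("wds/", "", 1)
    return template.format(dataset_cleaned=dataset_cleaned)

url = expand_dataset_root(
    "https://huggingface.co/datasets/clip-benchmark/wds_{dataset_cleaned}/tree/main",
    "wds/mscoco_captions",
)
print(url)
# https://huggingface.co/datasets/clip-benchmark/wds_mscoco_captions/tree/main
```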

For generative, a similar command with task mscoco_generative does not seem to work; I will fix it in a PR and add docs for both.

mehdidc commented 1 year ago

Done