TIGER-AI-Lab / VLM2Vec

This repo contains the code and data for "VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks"
https://tiger-ai-lab.github.io/VLM2Vec/
Apache License 2.0

Why use Precision@1 instead of Recall@K as a metric? #14

Open · saicoco opened this issue 3 days ago

saicoco commented 3 days ago

I want to compare the performance differences between VLM2Vec, MM-Embed, and UniIR on retrieval tasks.

I also noticed that the retrieval data is the same in both MM-Embed and M-BEIR.

XMHZZ2018 commented 2 days ago

@saicoco

Thank you for your interest! Yes, our current framework uses Precision@1 as the metric. We are actively working on supporting additional metrics, including Recall@K and ranking-based metrics such as NDCG and MRR.
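
For reference, here is a minimal, framework-agnostic sketch of the metrics mentioned above (Precision@1, Recall@K, MRR, and binary-relevance NDCG@K). The function and variable names are illustrative and do not correspond to the VLM2Vec codebase; it assumes each query yields a ranked list of candidate IDs plus a set of ground-truth relevant IDs.

```python
# Hypothetical helper functions; not part of the VLM2Vec repo.
import math
from typing import List, Set


def precision_at_1(ranked_ids: List[str], relevant_ids: Set[str]) -> float:
    """1.0 if the top-ranked candidate is relevant, else 0.0."""
    return 1.0 if ranked_ids and ranked_ids[0] in relevant_ids else 0.0


def recall_at_k(ranked_ids: List[str], relevant_ids: Set[str], k: int) -> float:
    """Fraction of all relevant items that appear in the top k."""
    if not relevant_ids:
        return 0.0
    hits = sum(1 for doc_id in ranked_ids[:k] if doc_id in relevant_ids)
    return hits / len(relevant_ids)


def mrr(ranked_ids: List[str], relevant_ids: Set[str]) -> float:
    """Reciprocal rank of the first relevant item (0.0 if none retrieved)."""
    for rank, doc_id in enumerate(ranked_ids, start=1):
        if doc_id in relevant_ids:
            return 1.0 / rank
    return 0.0


def ndcg_at_k(ranked_ids: List[str], relevant_ids: Set[str], k: int) -> float:
    """NDCG@k with binary relevance."""
    dcg = sum(
        1.0 / math.log2(rank + 1)
        for rank, doc_id in enumerate(ranked_ids[:k], start=1)
        if doc_id in relevant_ids
    )
    ideal_hits = min(len(relevant_ids), k)
    idcg = sum(1.0 / math.log2(rank + 1) for rank in range(1, ideal_hits + 1))
    return dcg / idcg if idcg > 0 else 0.0


# Example: one query whose single relevant candidate is ranked second.
ranked = ["cand_3", "cand_7", "cand_1"]
relevant = {"cand_7"}
print(precision_at_1(ranked, relevant))   # 0.0
print(recall_at_k(ranked, relevant, 5))   # 1.0
print(mrr(ranked, relevant))              # 0.5
print(ndcg_at_k(ranked, relevant, 5))     # ~0.631
```

The example illustrates the difference the original question is getting at: with a single relevant item per query ranked just below the top, Precision@1 reports 0 while Recall@5, MRR, and NDCG@5 still give credit for retrieving it.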