This PR adds a table to the public_embedding_benchmarks/multi_modal_evals.md file, presenting the performance of various models on different benchmarks.
The table includes models such as clip-vit-base-patch16, nomic-embed-vision-v1.5, and embed-multilingual-v3.0.
It provides their respective dimensions and performance metrics on benchmarks such as BEIR Average, COCO (2017), Flickr1k, and more.
The "Total Average" column offers an aggregated view of the models' performance.
A note below the table specifies that all evaluations were calculated using Recall@10, and certain benchmarks are composites of evaluations gathered and annotated by Cohere.
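For context, Recall@10 measures the fraction of a query's relevant documents that appear among the top 10 retrieved results. A minimal sketch of the metric is below; the function name and example documents are illustrative only and are not taken from the evaluation code:

```python
def recall_at_k(retrieved, relevant, k=10):
    """Fraction of relevant items found in the top-k retrieved results."""
    if not relevant:
        return 0.0
    top_k = set(retrieved[:k])
    return len(top_k & set(relevant)) / len(relevant)

# Hypothetical example: 2 of 3 relevant documents appear in the top 10.
retrieved = ["d3", "d7", "d1", "d9", "d2", "d8", "d5", "d4", "d6", "d0"]
relevant = ["d1", "d2", "d11"]
print(recall_at_k(retrieved, relevant))  # 2/3
```

In a benchmark table like this one, the per-query scores are typically averaged over all queries in each dataset to produce the reported number.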