embeddings-benchmark / mteb

MTEB: Massive Text Embedding Benchmark
https://arxiv.org/abs/2210.07316
Apache License 2.0

Finalizing MMTEB #784

Closed KennethEnevoldsen closed 1 week ago

KennethEnevoldsen commented 5 months ago

This issue is to get an overview of what needs to be done before MMTEB can be finalized.

  1. Adding the last remaining datasets, notably:
    • [x] #641 #830
    • [x] #718
    • [x] #642 #833
  2. Speeding up the benchmark
    • [x] I believe we are only missing: #660
    • see also #836
    • see also #838
    • see also #835
  3. #705 (partly depends on 1, 2 as well as #879)

  4. Figuring out #752 (partly depends on 3)
  5. Deciding on meaningful benchmark subsets (depends on 3)
    • see #837
  6. #896 (depends on 3, 4 and 5) (see also #595)

    • see #839
  7. Updating leaderboard to new format https://github.com/embeddings-benchmark/mteb/discussions/674 (depends on 3-6)

Is there anything else that is needed?

vaibhavad commented 5 months ago

Construction of MMTEB-Lite? It would be a faster version of MMTEB. Two approaches come to mind for implementing this:

  1. Reducing the size of the document set of some retrieval benchmarks.
  2. Reducing the number of tasks.
dokato commented 5 months ago

Hey @KennethEnevoldsen I'd also like to merge the dataset in #773. Three reasons: a) we don't seem to have the Brazilian dialect represented, b) the multilabel task doesn't have large language coverage, c) I had it prepared for a long time, but the multilabel task only got merged last week while I was away. We only need to address a problem with stratification of the splits there.
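For the split-stratification problem mentioned above, one simple option is a greedy multilabel stratified split: process samples with the rarest labels first and assign each one to whichever split is currently furthest below its target share for that sample's labels. This is only a sketch of the general technique, not the approach actually used in #773; the function name and the `(samples, labels)` layout are assumptions.

```python
import random
from collections import Counter

def greedy_multilabel_split(samples, labels, test_ratio=0.2, seed=42):
    """Greedy multilabel stratified split.

    samples: list of items
    labels:  list of label sets, parallel to samples
    Returns (train_indices, test_indices).
    """
    rng = random.Random(seed)
    label_freq = Counter(l for ls in labels for l in ls)
    # Assign samples carrying the rarest labels first, so scarce labels
    # have a chance of appearing in both splits.
    order = sorted(
        range(len(samples)),
        key=lambda i: min((label_freq[l] for l in labels[i]), default=0),
    )
    test_counts, train_counts = Counter(), Counter()
    train_idx, test_idx = [], []
    for i in order:
        # How far is each split below its target count for this sample's labels?
        need_test = sum(
            test_ratio * label_freq[l] - test_counts[l] for l in labels[i]
        )
        need_train = sum(
            (1 - test_ratio) * label_freq[l] - train_counts[l] for l in labels[i]
        )
        if need_test > need_train or (need_test == need_train and rng.random() < test_ratio):
            test_idx.append(i)
            test_counts.update(labels[i])
        else:
            train_idx.append(i)
            train_counts.update(labels[i])
    return train_idx, test_idx
```

Proper implementations (e.g. iterative stratification) balance label combinations more carefully, but a greedy pass like this already avoids the common failure mode of a rare label landing entirely in one split.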

KennethEnevoldsen commented 5 months ago

@vaibhavad yes, definitely. We need to construct the benchmarks and ideally think about downsampling some of the larger retrieval datasets. A solution might be to implement a downsampling function for retrieval tasks.

Thanks @dokato, let's get it merged in as well. It looks to be in a reasonable state.
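A downsampling function of the kind discussed above could look roughly like the following sketch (the function name and the corpus/qrels dict layout are assumptions, not mteb's actual API): keep every document judged relevant to some query, and sample a fixed number of distractor documents from the rest.

```python
import random

def downsample_corpus(corpus, qrels, n_distractors=1000, seed=42):
    """Shrink a retrieval corpus while keeping all judged-relevant documents.

    corpus: dict mapping doc_id -> document text
    qrels:  dict mapping query_id -> {doc_id: relevance_score}
    """
    # Every document relevant to at least one query must be kept,
    # otherwise recall-based metrics become meaningless.
    relevant_ids = {
        doc_id
        for judgments in qrels.values()
        for doc_id, score in judgments.items()
        if score > 0
    }
    # Sample distractors from the remaining documents (sorted for determinism).
    other_ids = sorted(set(corpus) - relevant_ids)
    rng = random.Random(seed)
    sampled = rng.sample(other_ids, min(n_distractors, len(other_ids)))
    keep = relevant_ids | set(sampled)
    return {doc_id: corpus[doc_id] for doc_id in keep}
```

One caveat with this approach: retrieval over a smaller candidate pool is easier, so scores on a downsampled corpus are not directly comparable to scores on the full corpus.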

Ruqyai commented 5 months ago

Hey @KennethEnevoldsen I read the list and I think I can help with running models https://github.com/embeddings-benchmark/mteb/discussions/705

jordiclive commented 5 months ago

@KennethEnevoldsen Is there anything meaningful new contributors can help with?

KennethEnevoldsen commented 5 months ago

Hi @jordiclive! I believe there are multiple avenues to take: any of the outlined paper segments would be meaningful (see the updated post above), implementing models (see e.g. #845; I will finish it up either Monday or over the weekend), or starting work on 8)

bwanglzu commented 2 weeks ago

Quick question: is there a script to select & run all MMTEB tasks? I'm a bit unclear about the current development progress and about how MMTEB differs from the current MTEB (in different languages).

Best

Bo

Samoed commented 2 weeks ago

@bwanglzu You can select a benchmark like this:

import mteb
benchmark = mteb.get_benchmark("MTEB(eng, classic)")  # or mteb.get_benchmarks() for all

The full list of benchmarks is here.

KennethEnevoldsen commented 1 week ago

Will close this issue as MMTEB has been submitted, moving the public preprint release over to https://github.com/embeddings-benchmark/mteb/issues/1405