embeddings-benchmark / leaderboard

Code for the MTEB leaderboard
https://hf.co/spaces/mteb/leaderboard

Automate updating results from the `results` repository #17

Closed Samoed closed 1 month ago

Samoed commented 1 month ago

Hi! I wanted to add results from the `results` repository and possibly update them automatically (or at least add Russian models and tasks). I tried to find a way to convert mteb results into the external model results format. Is there a script for that? Also, how do I add models? Should I fill in their parameters myself, or is there a script for that too? I found a `model_size` script in `utils`, but its output is a bit different from what's in `model_meta`.

orionw commented 1 month ago

Yes, I think we have scripts for these, depending on what you mean.

If you want to add model results that aren't found on the model's HF page, you can store them in `results` and update the `model_meta`. If you have a model whose scores are attached to an HF model that has the `mteb` tag, the leaderboard should pull those results automatically. EDIT: it should pull them automatically via the `refresh.py` file through GitHub Actions, which had been failing (thanks again for the fix). It should start pulling them tomorrow once you push new results.
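For reference, a minimal sketch of what a conversion script could look like. The directory layout assumed here (`<model_name>/<task_name>.json`, each file holding a `"test"` split with a `"main_score"` field) is a guess for illustration, not the documented format of the `results` repo:

```python
import json
from pathlib import Path

def collect_scores(results_dir: str) -> dict:
    """Walk a local clone of the results repo and return
    {model_name: {task_name: main_score}}.

    Assumed (hypothetical) layout: <model_name>/<task_name>.json,
    where each JSON file has a "test" object with a "main_score".
    """
    scores: dict = {}
    for path in Path(results_dir).glob("*/*.json"):
        model, task = path.parent.name, path.stem
        with open(path) as f:
            data = json.load(f)
        # Skip files that don't match the expected shape
        main = data.get("test", {}).get("main_score")
        if main is not None:
            scores.setdefault(model, {})[task] = main
    return scores
```

The real pipeline (e.g. `refresh.py`) may differ; this just shows the shape of "read per-task JSON, aggregate per model" that a leaderboard update would need.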

This may not have answered your question though so please let me know!

Samoed commented 1 month ago

Oh, sorry. I somehow missed that results are pulled from the `results` repo. Thank you very much!