McGill-NLP / llm2vec

Code for 'LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders'
https://mcgill-nlp.github.io/llm2vec/
MIT License
1.32k stars 96 forks

Test llama and Mistral on mteb benchmark #111

Closed NouamaneELGueddarii closed 4 months ago

NouamaneELGueddarii commented 4 months ago

Hello, I am trying to test the Llama and Mistral models on the German clustering benchmark in MTEB. How can I do it?
The MTEB codebase just changed: its encode function now requires a new prompt_name argument, which is not implemented in the llm2vec encode function.
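In the meantime, one way around the mismatch is a thin wrapper that accepts MTEB's prompt_name keyword and maps it to an instruction prefix before delegating to the underlying encode. This is a minimal sketch, not llm2vec's actual API: DummyEncoder stands in for an LLM2Vec model, and the MTEBCompatibleWrapper class, its prompts mapping, and the instruction-prefix convention are all assumptions for illustration.

```python
# Hypothetical wrapper adapting an encode() that lacks MTEB's new
# `prompt_name` keyword. DummyEncoder is a stand-in for an LLM2Vec model;
# all names here are assumptions, not the library's real API.

class DummyEncoder:
    def encode(self, sentences, batch_size=32):
        # Stand-in: one fixed-size "embedding" per sentence.
        return [[float(len(s))] for s in sentences]


class MTEBCompatibleWrapper:
    """Accepts MTEB's prompt_name kwarg and maps it to an instruction prefix."""

    def __init__(self, model, prompts=None):
        self.model = model
        self.prompts = prompts or {}  # task name -> instruction string

    def encode(self, sentences, prompt_name=None, **kwargs):
        # Look up an instruction for this MTEB task; fall back to no prefix.
        instruction = self.prompts.get(prompt_name, "")
        if instruction:
            sentences = [f"{instruction} {s}" for s in sentences]
        return self.model.encode(sentences, **kwargs)


wrapper = MTEBCompatibleWrapper(
    DummyEncoder(), prompts={"Clustering": "Cluster the following text:"}
)
embeddings = wrapper.encode(["hallo welt", "guten tag"], prompt_name="Clustering")
print(len(embeddings))  # one embedding per input sentence
```

The same pattern works for any task-specific instruction scheme: MTEB passes the task name as prompt_name, and the wrapper decides how (or whether) to turn it into a prompt.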

vaibhavad commented 4 months ago

I have added the code and documentation to evaluate models on MTEB; you can find it here. It is compatible with the latest version of MTEB (which requires a prompt_name in encode).

vaibhavad commented 4 months ago

Closing as it is stale. Feel free to re-open if you have any more questions.

NouamaneELGueddarii commented 4 months ago

Thank you for your answer. I have tried to use the code you provided, but it seems there is an issue with loading the adapter in PEFT. Here is the error: ImportError: cannot import name 'inject_adapter_in_model' from 'peft' (/opt/conda/lib/python3.11/site-packages/peft/__init__.py).

Do you also have the same error, or is it an issue on my end? Thank you!
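That ImportError usually means the installed peft release predates inject_adapter_in_model, so upgrading peft (pip install -U peft) is the likely fix. A small diagnostic like the sketch below can confirm whether the installed peft actually exposes the symbol; the helper function and its messages are illustrative, not part of either library.

```python
import importlib.util


def check_peft_symbol(symbol="inject_adapter_in_model"):
    """Return a diagnostic string; safe to run even if peft is not installed."""
    spec = importlib.util.find_spec("peft")
    if spec is None:
        return "peft is not installed"
    import peft
    version = getattr(peft, "__version__", "unknown")
    if hasattr(peft, symbol):
        return f"ok: peft {version} exposes {symbol}"
    # Old release on sys.path: upgrading typically resolves the ImportError.
    return f"peft {version} lacks {symbol}; try: pip install -U peft"


print(check_peft_symbol())
```

If the import still fails after upgrading, printing peft.\_\_file\_\_ shows which installation Python is actually picking up, which helps catch a stale copy shadowing the upgraded one.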