EleutherAI / lm-evaluation-harness

A framework for few-shot evaluation of language models.
https://www.eleuther.ai
MIT License

How to use a vLLM-hosted model? #1963

Open darsh-essential opened 3 weeks ago

darsh-essential commented 3 weeks ago

Are there docs on best practices for using vLLM-hosted models?

I start a vLLM server with:

python -m vllm.entrypoints.openai.api_server --model model_path

and then try evaluating it with:

lm_eval --model local-chat-completions \
  --model_args model=model_path,base_url=http://localhost:8000/v1 \
  --tasks /home/darshshah/lm_eval/tasks/financebench_inference_binary \
  --batch_size 12 \
  --output_path /home/darshshah/lm_eval/tasks/financebench_inference_binary/outputs \
  --log_samples

But I get the following error:

openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
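(For context: the error is raised by the openai Python client, which the local-chat-completions backend drives under the hood; the client refuses to initialize without some api_key, even though a default vLLM server never validates it. A minimal sketch of hitting the same endpoint directly, assuming openai>=1.0 is installed and the server above is running:)

from openai import OpenAI

# The openai>=1.0 client requires *some* api_key; a default vLLM
# server does not check it, so a placeholder like "EMPTY" is enough.
client = OpenAI(
    base_url="http://localhost:8000/v1",  # the vLLM server started above
    api_key="EMPTY",                      # placeholder, never validated by vLLM
)

# "model_path" stands in for whatever --model was passed to vLLM.
resp = client.chat.completions.create(
    model="model_path",
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.choices[0].message.content)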

chimezie commented 3 weeks ago

OPENAI_API_KEY needs to be set to "EMPTY" for locally hosted models, so I usually launch lm_eval this way for my own, non-OpenAI models:

% OPENAI_API_KEY=EMPTY lm_eval --model local-completions [..etc..]
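Applied to the chat endpoint from the question, that becomes (a sketch; the remaining flags are unchanged, and depending on the harness version local-chat-completions may also need --apply_chat_template):

export OPENAI_API_KEY=EMPTY
lm_eval --model local-chat-completions \
  --model_args model=model_path,base_url=http://localhost:8000/v1 \
  [..remaining flags as in the question..]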