huggingface / lighteval

LightEval is a lightweight LLM evaluation suite that Hugging Face has been using internally with the recently released LLM data processing library datatrove and LLM training library nanotron.
MIT License

`Could not initialize the JudgeOpenAI model` and `openai` import error #166


lewtun commented 2 months ago

FYI I'm seeing a lot of these "errors" in my logs when running lighteval

```
Could not initialize the JudgeOpenAI model:
[Errno 2] No such file or directory: 'src/lighteval/tasks/extended/mt_bench/judge_prompts.jsonl'
```

This doesn't raise an exception, but it would be good to clean these up from the logs.

I also learned that you need the `openai` dependency even if you don't run `mt_bench`, due to an import error from this line: https://github.com/huggingface/lighteval/blob/fc428f600c40b214b4d784540d1a0db5ed193d30/src/lighteval/metrics/llm_as_judge.py#L30

I would recommend either promoting `openai` to a core dependency, or adding a checker like `is_openai_available()` and only importing the module when it is available.
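A minimal sketch of the guarded-import pattern suggested above (the helper name `is_openai_available` follows the suggestion in this issue; the `None` fallback and its use are assumptions, not lighteval's actual implementation):

```python
import importlib.util


def is_openai_available() -> bool:
    """Return True if the `openai` package is installed, without importing it."""
    return importlib.util.find_spec("openai") is not None


# Only import when the optional dependency is present; callers must check
# for None before constructing a judge (hypothetical fallback behavior).
if is_openai_available():
    from openai import OpenAI
else:
    OpenAI = None
```

With this in place, importing the module no longer fails on installs without `openai`; only code paths that actually need the judge (e.g. `mt_bench`) would hit the `None` check.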

NathanHB commented 2 months ago

Thanks for the catch! I didn't notice those on my local install; I will solve this asap :)