LightEval is a lightweight LLM evaluation suite that Hugging Face has been using internally with the recently released LLM data processing library datatrove and LLM training library nanotron.
`Could not initialize the JudgeOpenAI model` and `openai` import error #166
FYI, I'm seeing a lot of these "errors" in my logs when running lighteval. They don't raise an exception, but it might be good to clean them up from the logs.

I also learned that you need the `openai` dependency even if you don't run mt_bench, due to an import error from this line: https://github.com/huggingface/lighteval/blob/fc428f600c40b214b4d784540d1a0db5ed193d30/src/lighteval/metrics/llm_as_judge.py#L30

I would recommend either promoting `openai` to a core dependency, or adding a checker like `is_openai_available()` and only importing the method when it is available.
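The guarded-import approach could look something like the sketch below (a minimal illustration, not lighteval's actual code; the helper name `is_openai_available` follows the suggestion above, and the fallback behavior is an assumption):

```python
import importlib.util


def is_openai_available() -> bool:
    """Return True if the optional `openai` package is installed."""
    return importlib.util.find_spec("openai") is not None


# Only import the OpenAI client when the optional dependency is present,
# so users who never run mt_bench don't need `openai` installed.
if is_openai_available():
    from openai import OpenAI
else:
    # Hypothetical fallback: callers must check is_openai_available()
    # before constructing a judge that needs the OpenAI client.
    OpenAI = None
```

With this pattern, modules that need the judge can raise a clear error at use time (e.g. "please install `openai` to run mt_bench") instead of failing at import time for everyone.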