Closed chuandudx closed 2 weeks ago
Issue encountered

While setting up the framework to evaluate with LLM-as-judge, it would be helpful to be able to test end-to-end without special permissions such as an openai_key or an HF Pro subscription. The current models in src/lighteval/metrics/metrics.py contain the following options:

When trying to call the llama model, a free HF_TOKEN gives the following error:

Solution/Feature

I tried to define a new LLM judge using a smaller model:

However, this gave a different error that I have not been able to figure out how to resolve: the error refers to the OpenAI API even though the intent was to call a TinyLlama model.

Thank you!

I suspect this model is not provided by the free tier of the on-the-fly inference endpoints - can you try with Llama 3.1 70B, for example, or Command R+?

Thank you for the feedback! @JoelNiklaus figured out that it's because we should pass use_transformers=True when constructing the judge instance. Do you think it would be helpful to add an example like this in metrics.py, or as a note in the README?

Very good idea, please do add a note in the wiki! :hugs:
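For anyone landing here before a wiki note exists, here is a minimal pseudocode sketch of the fix discussed in this thread. Only the `use_transformers=True` argument and the file path src/lighteval/metrics/metrics.py come from the discussion above; the class name, import path, and the other constructor arguments are assumptions and may not match the actual lighteval API — check the source for the real signature.

```
# Hypothetical sketch, not runnable as-is: names below are assumptions.
from lighteval.metrics.llm_as_judge import JudgeLM  # assumed import path

judge = JudgeLM(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # any small, freely accessible model
    use_transformers=True,  # run the judge locally via transformers
                            # instead of an OpenAI / inference-endpoint backend
    # ...other task-specific arguments (judge prompt template, response parser, etc.)
)
```

The key point is the backend switch: without `use_transformers=True`, the judge falls back to a remote API client, which explains the OpenAI-related error seen above even when a local TinyLlama model was intended.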