huggingface / lighteval

Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends
MIT License

Add a logger in the metric functions #135

Closed: NathanHB closed this issue 1 month ago

NathanHB commented 6 months ago

It is currently cumbersome to log details of what is happening inside metric functions, for example logging the judge prompt in the LLM-as-judge metric. Passing the evaluation tracker to the metric functions would greatly simplify this.
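
For illustration, here is a minimal sketch of the idea: a metric function that optionally receives a tracker object and uses it to persist per-sample details such as the judge prompt. The `EvaluationTracker` class and the `log_metric_detail` method are hypothetical stand-ins, not lighteval's actual API.

```python
from typing import Optional


class EvaluationTracker:
    """Hypothetical tracker that collects per-sample metric details."""

    def __init__(self):
        self.details = []

    def log_metric_detail(self, detail: dict) -> None:
        self.details.append(detail)


def llm_judge_metric(
    prediction: str,
    reference: str,
    tracker: Optional[EvaluationTracker] = None,
) -> float:
    # Build the prompt that would be sent to the judge model.
    judge_prompt = (
        f"Score this answer against the reference.\n"
        f"Answer: {prediction}\nReference: {reference}"
    )
    if tracker is not None:
        # With the tracker threaded through, the judge prompt is saved
        # alongside the score instead of being lost inside the function.
        tracker.log_metric_detail({"judge_prompt": judge_prompt})
    # Placeholder scoring logic standing in for the judge model's verdict.
    return 1.0 if prediction.strip() == reference.strip() else 0.0


tracker = EvaluationTracker()
score = llm_judge_metric("Paris", "Paris", tracker=tracker)
print(score, tracker.details)
```

The design choice here is that the tracker is an optional keyword argument, so existing metric functions keep working unchanged and only metrics that need detailed logging opt in.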