huggingface / lighteval


Expose a few model predictions / gold answers in the logs #164

Closed: lewtun closed this issue 1 month ago

lewtun commented 5 months ago

For generative benchmarks like MATH / GSM8k / IFEval, it would be great to have some visibility in the logs on how the prompts are formatted, what the generations look like, what the gold answers are, etc.

Currently, the best approach I've found is to first run the benchmark with `--max_samples` and then manually inspect the details Parquet file. However, this is rather cumbersome, especially when launching many evals in parallel :)
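For reference, this is roughly how I eyeball a details file today with pandas (the path and the printed columns below are illustrative, not the exact layout lighteval writes):

```python
# Sketch: inspect the first few rows of a details Parquet file from a previous run.
# The path is a placeholder; the actual file lives under the run's output directory,
# and the available columns depend on the task.
import pandas as pd

details_path = "output/details/<model>/<task>/details.parquet"  # hypothetical path

df = pd.read_parquet(details_path)
print(df.columns.tolist())        # see which fields are available
print(df.head(3).to_string())     # prompts, generations, gold answers, ...
```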

Perhaps we can store the first N examples in the logs?
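Something along these lines, for instance (`log_first_n` and the record field names are hypothetical, just to illustrate the idea, not existing lighteval APIs):

```python
# Sketch: log the prompt / prediction / gold answer for the first n examples of a task.
import logging

logger = logging.getLogger("lighteval")

def log_first_n(task_name: str, records: list[dict], n: int = 3) -> None:
    """Log a readable preview of the first n evaluation records for a task."""
    for record in records[:n]:
        logger.info(
            "[%s] prompt=%r | prediction=%r | gold=%r",
            task_name,
            record.get("prompt"),
            record.get("prediction"),
            record.get("gold"),
        )
```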

NathanHB commented 4 months ago

Aren't the details also stored in JSON format? That would make it easier for you to inspect them. Otherwise, good idea to log out the first element of each task.
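If only the Parquet files are produced, a quick workaround for getting a JSON preview could look like this (the pandas calls are standard; the path is illustrative):

```python
# Sketch: dump the first rows of a details Parquet file to JSON for easier reading.
import pandas as pd

df = pd.read_parquet("output/details/<task>/details.parquet")  # hypothetical path
df.head(10).to_json("details_preview.json", orient="records", indent=2)
```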