openai / evals

Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.

Support multiple completions for ModelBasedClassify #1484

Open tom-christie opened 3 months ago

tom-christie commented 3 months ago

Describe the feature or improvement you're requesting

It would be nice to be able to score multiple sampled completions using ModelBasedClassify. Even when n>1 is passed to a completion function and multiple samples are returned, only the first one is graded, because of this line:

https://github.com/openai/evals/blob/main/evals/elsuite/utils.py#L193

Additional context

I would like to be able to raise the temperature, ask a model to produce N completions, and have each completion graded separately using a rubric. This appears to work fine for non-model-based scoring.
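The desired behavior could be sketched roughly as follows. This is not the real evals API — `get_completions` and `grade_with_rubric` are hypothetical stand-ins — it only illustrates grading every sample instead of just the first:

```python
# Hypothetical sketch of the requested behavior: when a completion
# function returns n > 1 samples, grade each one with the rubric
# rather than only the first. The function names below are
# illustrative stand-ins, not the actual evals API.

def get_completions(prompt: str, n: int = 3) -> list[str]:
    # Stand-in for a completion function called with n > 1 and a
    # raised temperature; returns n sampled completions.
    return [f"{prompt} -> sample {i}" for i in range(n)]

def grade_with_rubric(completion: str) -> dict:
    # Stand-in for a model-based grader that scores one completion
    # against a rubric.
    return {"completion": completion, "score": 1.0}

def grade_all(prompt: str, n: int = 3) -> list[dict]:
    # Desired: one rubric grade per sampled completion,
    # instead of grading only completions[0].
    return [grade_with_rubric(c) for c in get_completions(prompt, n=n)]

if __name__ == "__main__":
    for result in grade_all("Explain recursion", n=3):
        print(result["score"], result["completion"])
```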