openai / evals

Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.

Randomly select MMMU answer when none is returned from the model #1447

Closed · etr2460 closed this 6 months ago

etr2460 commented 6 months ago

Randomly selecting an answer when the model returns none is the behavior the original MMMU evaluation code used, so we should match it here. A minimal sketch of the fallback is shown below.
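This sketch assumes a simple regex-based answer parser; the function names and parsing logic are illustrative, not the actual evals implementation:

```python
import random
import re
from typing import Optional

def parse_answer(response: str, option_letters: list[str]) -> Optional[str]:
    """Try to extract a single option letter (e.g. "A"-"D") from the response."""
    matches = re.findall(rf"\b({'|'.join(option_letters)})\b", response)
    # Take the last match, if any, as the model's final answer.
    return matches[-1] if matches else None

def select_answer(response: str, option_letters: list[str]) -> str:
    """Return the parsed answer, falling back to a random option when none is found."""
    parsed = parse_answer(response, option_letters)
    if parsed is None:
        # Mirror MMMU's evaluation behavior: an unanswered question
        # gets a uniformly random guess rather than counting as wrong.
        return random.choice(option_letters)
    return parsed
```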

As an example, this change increased the mmmu-music benchmark score from 0.3666 to 0.4, since several questions in that benchmark were left unanswered by the model and now receive a random guess (expected accuracy 1/N instead of 0).