openai / evals

Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.

Improve MMMU performance with prompt engineering #1450

Closed: etr2460 closed this issue 6 months ago

etr2460 commented 6 months ago

With this improvement, we now have a 0-shot performance of 59.6% (averaged over 3 eval runs) on the MMMU validation set, which beats the 56.8% reported in the MMMU paper.
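For context, here is a minimal sketch of the kind of zero-shot multiple-choice prompt formatting that "prompt engineering" typically refers to for MMMU-style questions. The function name and template below are illustrative only and are not the actual change made in this PR:

```python
# Illustrative sketch (not the PR's actual prompt): format an MMMU-style
# multiple-choice question with lettered options and an explicit instruction
# to answer with a single letter, which makes 0-shot answers easier to parse.

def build_mc_prompt(question: str, options: list[str]) -> str:
    """Build a zero-shot multiple-choice prompt with lettered options."""
    letters = "ABCDEFGHIJ"
    option_lines = "\n".join(
        f"({letters[i]}) {opt}" for i, opt in enumerate(options)
    )
    return (
        f"{question}\n\n"
        f"{option_lines}\n\n"
        "Answer with the letter of the correct option only."
    )


if __name__ == "__main__":
    # Hypothetical usage example with a made-up question.
    print(
        build_mc_prompt(
            "Which structure is highlighted in the image?",
            ["Mitochondrion", "Golgi apparatus", "Nucleus", "Ribosome"],
        )
    )
```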