openai / evals

Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.

Add support for gpt-4o #1530

Open androettop opened 6 months ago

androettop commented 6 months ago

This is not a PR to add evals; it simply adds support for using gpt-4o.

#1529
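For context, the change being asked for here is small: evals needs to recognize the new model name before it can be passed to the CLI. The sketch below is illustrative only; the dictionary and file names are assumptions about how such a registration might look, not the actual contents of this PR.

```python
# Illustrative sketch only: the dict name below is an assumption, not the
# actual evals source. A change like this typically just registers the new
# model name and its context window so the framework accepts it.
MODEL_CONTEXT_LENGTHS = {
    "gpt-4-turbo": 128_000,
    "gpt-4o": 128_000,  # newly supported model; 128k context window
}
```

With the model registered, the standard CLI invocation from the README would apply, e.g. `oaieval gpt-4o test-match`.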

lucapericlp commented 2 months ago

@andrew-openai @etr2460 @katyhshi bump on this PR. I was quite surprised to find that OpenAI's own evals doesn't support their own latest model! If anything is blocking this PR, it might be worth relaying the reason for the delay in merging. Thanks!

sakher commented 2 months ago

Are you guys sunsetting evals? Any plans to support o1 as well?