openai / evals

Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.

Fix small typo in oaieval run function #1438

Closed · inwaves closed this 6 months ago

inwaves commented 6 months ago

This fixes a small typo in `oaieval.py`, where `additional_completion_args` was misspelled as `additonal_completion_args`. Running pre-commit also removed an unused import.
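As a hypothetical sketch (not the actual `oaieval.py` code), this is why a misspelled keyword name can fail silently in Python: when extra options are collected via `**kwargs` and read back under the correct spelling, a value passed under the misspelled key is simply never seen, with no error raised.

```python
def run_eval(**kwargs):
    # Downstream code looks up the *correctly* spelled key; anything stored
    # under a misspelled key is silently ignored.
    extra = kwargs.get("additional_completion_args", {})
    return extra

# The misspelled name is accepted by **kwargs but never read:
print(run_eval(additonal_completion_args={"temperature": 0}))   # -> {}
# The correct spelling works as intended:
print(run_eval(additional_completion_args={"temperature": 0}))  # -> {'temperature': 0}
```

This is why such typos tend to survive until someone notices the option having no effect, rather than being caught by an exception.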