openai / evals

Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.

Fix formatting/typing so pre-commit hooks pass #1451

Closed: ianmckenzie-oai closed this 8 months ago

ianmckenzie-oai commented 9 months ago

(Not an eval)

One-line summary: Pre-commit hooks were failing. I identified the main cause and then fixed all secondary pre-commit issues. The only file whose logic changed is `oaievalset.py`.

I was having issues with type-hinting and traced them to the stale `typings` directory, which was causing the `from openai import OpenAI` import to register as an error. I then went through and fixed every issue reported by `pre-commit run --all-files`.
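The shadowing failure described above can be reproduced in miniature. This is a hedged sketch, not code from the PR: the directory and module names are hypothetical, and it uses Python's runtime import path as a stand-in for a type checker's stub search path. The point is the same in both cases: a stale local copy of a package is found before the real one, so symbols the real package provides appear to be missing.

```python
import sys
import tempfile
from pathlib import Path

# Create a stale local package (hypothetical name) that lacks the
# OpenAI symbol, mimicking an outdated stub directory.
tmp = Path(tempfile.mkdtemp())
stale = tmp / "openai_stub_demo"
stale.mkdir()
(stale / "__init__.py").write_text("# stale stub: no OpenAI symbol here\n")

# Local paths are searched first, so the stale copy shadows anything
# installed under the same name.
sys.path.insert(0, str(tmp))
import openai_stub_demo

# The symbol a current package would provide is missing, which is why
# tools report errors like "cannot import name 'OpenAI'".
print(hasattr(openai_stub_demo, "OpenAI"))  # prints False
```

Deleting the stale directory (step 1 of the manual work) removes the shadowing copy, and resolution falls through to the real, current package.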

NOTE:

The manual work involved was mainly:

  1. Deleting the `typings` directory, which was interfering with openai type hints (such as `from openai import OpenAI`)
  2. Fixing type issues in `oaievalset.py`.
  3. Moving the `client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))` line below all the imports.
  4. Breaking lines of length >767 into smaller chunks using line continuation.
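Items 3 and 4 above can be sketched as follows. This is an illustrative fragment, not code from the PR; the `OpenAI` class here is a local stand-in so the snippet runs without the real dependency:

```python
import os

# Stand-in for openai.OpenAI, so this sketch has no external dependency.
class OpenAI:
    def __init__(self, api_key=None):
        self.api_key = api_key

# Item 3: construct the client only after every import, keeping
# module-level statements out of the import block (cf. flake8 E402).
client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

# Item 4: one form of line continuation -- implicit string
# concatenation inside parentheses splits an over-long line into
# shorter physical lines without changing the resulting value.
message = (
    "first half of an over-long string, "
    "second half of an over-long string"
)
print(message)
```

Parenthesized continuation is generally preferred over backslash continuation because it survives trailing whitespace and reflows cleanly under auto-formatters.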

Thus this PR is broken into three parts:

  1. Deleting `typings` (first commit)
  2. Manually cleaning up issues (middle commits)
  3. Applying autofixes from the pre-commit hooks (last commit)