openai / human-eval

Code for the paper "Evaluating Large Language Models Trained on Code"
MIT License

Fix type signature of read_problems function #9

Closed Linyxus closed 2 years ago

Linyxus commented 2 years ago

The read_problems function in human_eval.data actually returns a dictionary, but it is wrongly declared to return an iterable of dictionaries:

def read_problems(evalset_file: str = HUMAN_EVAL) -> Iterable[Dict[str, Dict]]:

This is a simple fix for the typo.
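For context, here is a minimal sketch of why the dict annotation is the right one. It mirrors the shape of read_problems in human_eval.data (simplified: the HUMAN_EVAL default path and gzip handling are omitted, and stream_jsonl is reduced to a plain JSONL reader): the function collects the streamed problems into a dict keyed by task_id, so its return type is Dict[str, Dict], not Iterable[Dict[str, Dict]].

```python
import json
from typing import Dict, Iterable

def stream_jsonl(filename: str) -> Iterable[Dict]:
    # Yield one JSON object per non-empty line of a JSON Lines file.
    with open(filename) as fp:
        for line in fp:
            if line.strip():
                yield json.loads(line)

def read_problems(evalset_file: str) -> Dict[str, Dict]:
    # The comprehension builds a single dict keyed by task_id,
    # which is why the return annotation is Dict[str, Dict].
    return {task["task_id"]: task for task in stream_jsonl(evalset_file)}
```

With the corrected annotation, callers can index problems by task_id (e.g. problems["HumanEval/0"]["prompt"]) without type checkers flagging the subscript on an Iterable.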

hauntsaninja commented 2 years ago

Thank you!