abacaj / code-eval

Run evaluation on LLMs using human-eval benchmark
MIT License

fix command for evaluate_functional_correctness #2

Closed: tmm1 closed this 1 year ago

abacaj commented 1 year ago

Once you install the human-eval dependency, you can call `evaluate_functional_correctness` directly without referencing the Python file: https://python-packaging.readthedocs.io/en/latest/command-line-scripts.html
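
For instance, a minimal invocation might look like the sketch below. The install command and the `samples.jsonl` filename are placeholders for illustration, not the repo's exact instructions:

```sh
# Install human-eval (assumes the repo has been cloned locally);
# installation registers evaluate_functional_correctness as a
# console-script entry point, per the packaging link above.
pip install -e human-eval

# Call the entry point directly instead of the module's .py file.
# samples.jsonl is a placeholder for your generated completions file.
evaluate_functional_correctness samples.jsonl
```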