abacaj / code-eval
Run evaluation on LLMs using human-eval benchmark
MIT License · 362 stars · 34 forks
Update eval_llama.py #12
Closed · acrastt closed this issue 10 months ago