abacaj/code-eval
Run evaluation on LLMs using human-eval benchmark
MIT License · 379 stars · 36 forks
Update eval_llama.py #12
Closed by acrastt 1 year ago
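
For context on what a script like eval_llama.py automates, below is a minimal sketch of the HumanEval scoring loop, using the documented `read_problems`/`write_jsonl` API from OpenAI's `human-eval` package. The `generate_one_completion` function is a hypothetical stand-in for the model-specific generation code (e.g. the Llama path this issue touches), not this repo's actual implementation.

```python
# Minimal HumanEval workflow sketch: generate a completion per task,
# write them to a JSONL file, then score with the human-eval CLI.
from human_eval.data import read_problems, write_jsonl


def generate_one_completion(prompt: str) -> str:
    # Hypothetical placeholder: call your LLM here and return only
    # the generated code that should follow the prompt.
    raise NotImplementedError


problems = read_problems()  # dict: task_id -> problem fields, incl. "prompt"

samples = [
    dict(
        task_id=task_id,
        completion=generate_one_completion(problems[task_id]["prompt"]),
    )
    for task_id in problems
]
write_jsonl("samples.jsonl", samples)

# Then compute pass@k from the shell:
#   evaluate_functional_correctness samples.jsonl
```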