DevQualityEval: An evaluation benchmark 📈 and framework to compare and evolve the quality of code generation of LLMs.
Script for sequentially evaluating common models with "light" repository #189
Closed by bauersimon 2 weeks ago