DevQualityEval: An evaluation benchmark 📈 and framework to compare and evolve the quality of code generation of LLMs.
Follow-up "Code repairing task to enable models to fix code with compilation errors" #200
Open · opened by ruiAzevedo19 1 week ago
Follow-ups:
- "mistakes" testdata only once: https://github.com/symflower/eval-dev-quality/pull/170#discussion_r1642925996