symflower / eval-dev-quality

DevQualityEval: An evaluation benchmark 📈 and framework to compare and evolve the quality of code generation of LLMs.
https://symflower.com/en/company/blog/2024/dev-quality-eval-v0.4.0-is-llama-3-better-than-gpt-4-for-generating-tests/
MIT License

Support multiple evaluation tasks #165

Open ahumenberger opened 3 weeks ago

ahumenberger commented 3 weeks ago

As of now, we have only a single evaluation task: "provide tests for this code". In a future release we want to run other tasks as well, e.g. "repair this compilation error". This requires structural changes.
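One way to structure this (a minimal sketch, not the actual implementation; the `Task` interface and the type names `writeTestsTask` / `repairCodeTask` are hypothetical) is to abstract each evaluation task behind a common interface, so the evaluation loop can iterate over registered tasks instead of hard-coding the test-generation one:

```go
package main

import "fmt"

// Task is a hypothetical abstraction over evaluation task types.
type Task interface {
	// ID returns a unique identifier for the task type.
	ID() string
	// Prompt builds the model prompt for the given source code.
	Prompt(code string) string
}

// writeTestsTask asks the model to generate tests for the given code.
type writeTestsTask struct{}

func (writeTestsTask) ID() string { return "write-tests" }
func (writeTestsTask) Prompt(code string) string {
	return "Provide tests for this code:\n" + code
}

// repairCodeTask asks the model to fix a compilation error.
type repairCodeTask struct{}

func (repairCodeTask) ID() string { return "repair-code" }
func (repairCodeTask) Prompt(code string) string {
	return "Repair this compilation error:\n" + code
}

func main() {
	// The evaluation loop only depends on the interface, so adding a
	// new task type does not require touching the loop itself.
	tasks := []Task{writeTestsTask{}, repairCodeTask{}}
	for _, task := range tasks {
		fmt.Println(task.ID())
	}
}
```

New task types would then register an additional implementation rather than changing the evaluation logic.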

Tasks: