symflower / eval-dev-quality

DevQualityEval: An evaluation benchmark 📈 and framework to compare and evolve the quality of code generation of LLMs.
https://symflower.com/en/company/blog/2024/dev-quality-eval-v0.4.0-is-llama-3-better-than-gpt-4-for-generating-tests/
MIT License

Apply "symflower fix" to a "write-test" result of a model when it errors, so model responses can possibly be fixed #229

Closed ruiAzevedo19 closed 1 day ago

ruiAzevedo19 commented 4 days ago

Part of #213

ruiAzevedo19 commented 1 day ago

Follow-up: Don't run symflower fix if there is a timeout on the LLM response

bauersimon commented 1 day ago

@ruiAzevedo19 LGTM, please also take a look at my changes

ruiAzevedo19 commented 1 day ago

@bauersimon Also LGTM