symflower / eval-dev-quality

DevQualityEval: An evaluation benchmark 📈 and framework to compare and evolve the quality of code generation of LLMs.
https://symflower.com/en/company/blog/2024/dev-quality-eval-v0.4.0-is-llama-3-better-than-gpt-4-for-generating-tests/
MIT License

Log LLM queries and responses directly to files, to debug the evaluation logic #222

Closed ruiAzevedo19 closed 2 days ago

ruiAzevedo19 commented 6 days ago

Part of #204

ruiAzevedo19 commented 2 days ago

Closing since we'll change the way we do logging