symflower / eval-dev-quality

DevQualityEval: An evaluation benchmark 📈 and framework to compare and evolve the quality of code generation of LLMs.
https://symflower.com/en/company/blog/2024/dev-quality-eval-v0.4.0-is-llama-3-better-than-gpt-4-for-generating-tests/
MIT License

Do an evaluation run for all "good open weight models" with all available quantizations and different GPUs #209

Open zimmski opened 6 days ago

zimmski commented 6 days ago

See https://www.reddit.com/r/LocalLLaMA/comments/1dlsxab/comment/l9rzjj7/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button

Not sure yet how we should do that. CPU-only inference will be a problem here, and speed metrics are important as well.