symflower/eval-dev-quality
DevQualityEval: An evaluation benchmark 📈 and framework to compare and evolve the quality of code generation of LLMs.
https://symflower.com/en/company/blog/2024/dev-quality-eval-v0.4.0-is-llama-3-better-than-gpt-4-for-generating-tests/
MIT License
57 stars · 3 forks
Extract human-readable names for models #206
Closed by bauersimon 2 days ago
bauersimon commented 1 week ago
[x] For Ollama models
[x] For OpenRouter models
[x] For OpenAI models
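A minimal sketch of the idea behind this issue, assuming Go (the repository's language). The function name `humanModelName` and the transformation rules are hypothetical, not the repository's actual implementation: drop any provider path prefix from a model ID, keep the last segment, and title-case its hyphen-separated words.

```go
package main

import (
	"fmt"
	"strings"
)

// humanModelName is a hypothetical helper (not taken from the repository)
// that turns a provider-prefixed model ID such as
// "openrouter/meta-llama/llama-3-8b-instruct" into a human-readable name:
// it drops everything up to the last "/" and title-cases the remaining
// hyphen-separated words.
func humanModelName(id string) string {
	segments := strings.Split(id, "/")
	last := segments[len(segments)-1]
	words := strings.Split(last, "-")
	for i, w := range words {
		if w == "" {
			continue
		}
		words[i] = strings.ToUpper(w[:1]) + w[1:]
	}
	return strings.Join(words, " ")
}

func main() {
	fmt.Println(humanModelName("openrouter/meta-llama/llama-3-8b-instruct"))
	fmt.Println(humanModelName("ollama/llama3:8b"))
}
```

Real provider APIs may already return display names (e.g. OpenRouter's model metadata), in which case extraction would prefer those over string munging like the above.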