Stability-AI / StableLM

StableLM: Stability AI Language Models

Consider using OpenAI Evals #80

Open walking-octopus opened 1 year ago

walking-octopus commented 1 year ago

OpenAI Evals is an open-source, crowdsourced collection of tests designed to evaluate newly emergent capabilities in LLMs. While it is somewhat GPT-4-centric (tests that GPT-4 can already pass easily are not merged), it remains a valuable tool for automatically benchmarking LLMs.

As a well-designed, standardized benchmark, it would let us objectively compare open-access models against each other, and even against OpenAI's offerings, with little effort, giving us deeper insight into what works and what doesn't.
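
As a rough sketch of what adopting this would take: Evals supports plugging in custom completion functions, so a HuggingFace-hosted StableLM checkpoint could be wrapped roughly like below. This is a minimal sketch, assuming the `CompletionFn`/`CompletionResult` interfaces from openai/evals and the standard `transformers` generation API; the class names, checkpoint, and generation settings are illustrative, and it assumes a CUDA GPU is available.

```python
# Minimal sketch: wrap a StableLM checkpoint as an Evals completion function.
# CompletionFn / CompletionResult come from openai/evals (evals/api.py);
# the checkpoint name and generation settings below are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

from evals.api import CompletionFn, CompletionResult


class StableLMResult(CompletionResult):
    def __init__(self, text: str):
        self.text = text

    def get_completions(self) -> list[str]:
        return [self.text]


class StableLMCompletionFn(CompletionFn):
    def __init__(self, model_name: str = "stabilityai/stablelm-tuned-alpha-7b", **kwargs):
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        # Assumes a CUDA GPU; adjust dtype/device for your hardware.
        self.model = AutoModelForCausalLM.from_pretrained(
            model_name, torch_dtype=torch.float16
        ).to("cuda")

    def __call__(self, prompt, **kwargs) -> StableLMResult:
        # Evals may pass a plain string or a chat-style list of messages;
        # flatten chat prompts into one string for a causal LM.
        if not isinstance(prompt, str):
            prompt = "\n".join(m["content"] for m in prompt)
        inputs = self.tokenizer(prompt, return_tensors="pt").to(self.model.device)
        output = self.model.generate(
            **inputs, max_new_tokens=256, do_sample=True, temperature=0.7
        )
        # Decode only the newly generated tokens, not the prompt.
        completion = self.tokenizer.decode(
            output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
        )
        return StableLMResult(completion)
```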

For a reference on testing non-OpenAI models with Evals, see the OpenAssistant model evals.
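
With a wrapper like the sketch above, registering the class should only need a small YAML entry under `evals/registry/completion_fns/`, after which an eval could presumably be run with the standard CLI, e.g. `oaieval stablelm-tuned-alpha-7b test-match` (the completion-fn name here is an illustrative registry key, and `test-match` is just the example eval from the Evals docs).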