Closed Joffref closed 3 months ago
For more context, there are the following models exposed through this service:
If it's OpenAI-compatible, then all that's needed is to add entries for the desired models (I'd suggest the Llama 3 variants to start) to llm_benchmark_suite.py
and to generate an API key that we can use in our runner.
hmm, looks like an API key isn't actually needed. Added in https://github.com/fixie-ai/ai-benchmarks/pull/78, should start showing up in stats tomorrow (2024-06-12)
Awesome work done here; I've been watching these metrics daily for a few days now. Any idea why clicking on ovh.net redirects me to a sort of speed test instead of the official website? :+1:
It's just sending you to https://ovh.net, if there's another link we should use instead just LMK.
I think you should use https://endpoints.ai.cloud.ovh.net/ instead.
Care to make a PR? Just change line 207 here: https://github.com/fixie-ai/ai-benchmarks/pull/78/files
OVHcloud recently released a new product called AI Endpoints.
It offers off-the-shelf LLMs behind an OpenAI-compatible API.
For example, you can call Mixtral-8x22b using the following code snippet:
That'd be amazing to see it benched! I'd love to contribute to bringing this feature up; how could I do that?