ray-project / llmperf

LLMPerf is a library for validating and benchmarking LLMs
Apache License 2.0

Add support for 100+ LLMs - Anyscale, vertexai, ollama, perplexity, together ai, palm, openrouter #12

Closed ishaan-jaff closed 6 months ago

ishaan-jaff commented 7 months ago

This PR adds support for the above-mentioned LLMs using LiteLLM (https://github.com/BerriAI/litellm/). LiteLLM is a lightweight package that simplifies LLM API calls: use any LLM as a drop-in replacement for gpt-3.5-turbo.

Example

import os

from litellm import completion

## set ENV variables for the providers you plan to call
os.environ["OPENAI_API_KEY"] = "openai key"
os.environ["COHERE_API_KEY"] = "cohere key"
os.environ["ANTHROPIC_API_KEY"] = "anthropic key"

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion(model="command-nightly", messages=messages)

# anthropic call
response = completion(model="claude-instant-1", messages=messages)
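For benchmarking, the drop-in pattern above means a harness only needs to vary the model string per provider. Here is a minimal sketch of such a timing loop; the `timed_completion` helper is hypothetical (not part of llmperf or LiteLLM), and `completion` is stubbed so the sketch runs offline, whereas real use would `from litellm import completion`:

```python
import time

# Stub standing in for litellm.completion so this sketch runs offline.
# In real use, replace with: from litellm import completion
def completion(model, messages):
    return {"model": model, "choices": [{"message": {"content": "ok"}}]}

def timed_completion(model, messages):
    """Call the completion API and report wall-clock latency in seconds."""
    start = time.perf_counter()
    response = completion(model=model, messages=messages)
    latency = time.perf_counter() - start
    return response, latency

messages = [{"content": "Hello, how are you?", "role": "user"}]

# The only thing that changes per provider is the model string.
results = {}
for model in ["gpt-3.5-turbo", "command-nightly", "claude-instant-1"]:
    response, latency = timed_completion(model, messages)
    results[model] = latency

print(sorted(results))
# -> ['claude-instant-1', 'command-nightly', 'gpt-3.5-turbo']
```

Swapping the stub for the real `litellm.completion` turns this into a cross-provider latency comparison without any per-provider client code.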
ishaan-jaff commented 7 months ago

@waleedkadous @kylehh I thought litellm would be useful here to make benchmarking easier

I noticed you have an outstanding PR for adding HF inference (which LiteLLM already supports).

kylehh commented 6 months ago

LLMPerf v2 has been released, and LiteLLM was added as one of the LLM clients. Closing this PR.