BerriAI / litellm

Python SDK, Proxy Server (LLM Gateway) to call 100+ LLM APIs in OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, Replicate, Groq]
https://docs.litellm.ai/docs/

replace openai base with proxy w/ translation #120

Closed krrishdholakia closed 1 year ago

krrishdholakia commented 1 year ago

There are a lot of different LLM deployment providers. How do I easily replace my OpenAI base with their URL as a proxy? - https://github.com/petals-infra/chat.petals.dev/issues/20, https://www.banana.dev/, etc.

```python
def translate_function(model, messages, max_tokens):
    # Flatten OpenAI-style chat messages into a single prompt string
    prompt = " ".join(message["content"] for message in messages)
    return {"model": model, "prompt": prompt, "max_new_tokens": max_tokens}

openai.api_base = litellm.translate_api_call(custom_api_base, translate_function)
```
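For illustration, here is what the translated payload would look like (the function is repeated so the snippet stands alone; the model name and messages are made-up examples):

```python
def translate_function(model, messages, max_tokens):
    # Same translation as above: flatten chat messages into one prompt
    prompt = " ".join(message["content"] for message in messages)
    return {"model": model, "prompt": prompt, "max_new_tokens": max_tokens}

payload = translate_function(
    "llama-2-7b",  # hypothetical model name
    [{"role": "user", "content": "Hello"},
     {"role": "assistant", "content": "Hi!"}],
    max_tokens=64,
)
# payload == {"model": "llama-2-7b", "prompt": "Hello Hi!", "max_new_tokens": 64}
```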
krrishdholakia commented 1 year ago

Tracking 20 repos that might find this helpful:

ishaan-jaff commented 1 year ago

updates:

ishaan-jaff commented 1 year ago

How would this be possible? When you set a URL for the OpenAI base, it expects a server:

```python
openai.api_base = "https://localhost:500"
```

The proposed interface here is a function:

```python
openai.api_base = litellm.translate_api_call(custom_api_base, translate_function)
```

Open to suggestions on this

ishaan-jaff commented 1 year ago

Tried doing something like this:

```python
def custom_function():
    print("custom function called")

openai.api_base = custom_function
```

but the openai client never calls the function — it treats `api_base` as a URL and sends an HTTP `POST` to the `/chat/completions` route on whatever server that URL points to (i.e. `@app.route('/chat/completions', methods=["POST"])` on the proxy side).
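Since `api_base` has to point at a real server, the translation would have to live behind an HTTP endpoint instead of a function assignment. A minimal sketch using only the standard library (the translation mirrors `translate_function` above; the call to the custom backend is stubbed out with an echo, and the port is arbitrary):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def translate(body):
    # Flatten OpenAI-style chat messages into a single prompt string
    prompt = " ".join(m["content"] for m in body.get("messages", []))
    return {
        "model": body.get("model"),
        "prompt": prompt,
        "max_new_tokens": body.get("max_tokens", 256),
    }

class ProxyHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # This is the route the openai client actually hits
        if self.path != "/chat/completions":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        payload = translate(body)
        # Stub: echo the translated prompt back in an OpenAI-shaped reply.
        # A real proxy would POST `payload` to the custom backend here.
        reply = {
            "object": "chat.completion",
            "choices": [{"message": {"role": "assistant",
                                     "content": payload["prompt"]}}],
        }
        data = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

# To run: HTTPServer(("localhost", 8000), ProxyHandler).serve_forever()
# then: openai.api_base = "http://localhost:8000"
```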

ishaan-jaff commented 1 year ago

closing this