jkomoros opened this issue 1 year ago
Hi @jkomoros, I believe we can help with this issue. I'm the maintainer of LiteLLM: https://github.com/BerriAI/litellm
TL;DR: we let you use any LLM as a drop-in replacement for gpt-3.5-turbo.
You can use our proxy server, or spin up your own proxy server using LiteLLM.
The snippet below calls the provider API directly:
```python
from litellm import completion
import os

## set ENV variables
os.environ["OPENAI_API_KEY"] = "your-key"

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# falcon call
response = completion(model="falcon-40b", messages=messages)
```
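The drop-in idea above can be sketched as a tiny router that dispatches on the model name. This is a toy illustration of the pattern, not LiteLLM's actual code; the provider functions and routing table are assumptions for the sake of the example:

```python
# Toy sketch (NOT LiteLLM's real implementation): one completion()
# signature, with the provider chosen from the model name.
from typing import Callable, Dict, List

Message = Dict[str, str]

def _call_openai(model: str, messages: List[Message]) -> str:
    # Stand-in for a real OpenAI API call.
    return f"[openai:{model}] echoed {messages[-1]['content']}"

def _call_falcon(model: str, messages: List[Message]) -> str:
    # Stand-in for a real Falcon provider call.
    return f"[falcon:{model}] echoed {messages[-1]['content']}"

# Hypothetical routing table keyed by model name.
_ROUTES: Dict[str, Callable[[str, List[Message]], str]] = {
    "gpt-3.5-turbo": _call_openai,
    "falcon-40b": _call_falcon,
}

def completion(model: str, messages: List[Message]) -> str:
    # Same call signature regardless of provider; only `model` changes.
    return _ROUTES[model](model, messages)
```

The point is that swapping providers only changes the `model` string, never the calling code.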
Separately, the following were changed to by default not throw but return `''`:
- `countTokens` results for strings (since for some providers they go to the server)
- `getAPIKey` (the `changeme` API key)
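The non-throwing behavior can be sketched like this. The function and environment-variable names here are assumptions for illustration, not the project's actual code:

```python
# Hypothetical sketch: an accessor that returns '' instead of raising
# when no API key is configured, so callers decide how to handle it.
import os

def get_api_key(env_var: str = "OPENAI_API_KEY") -> str:
    # Previously this style of helper might raise when the variable was
    # missing; returning '' by default pushes the decision to the caller.
    return os.environ.get(env_var, "")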