microsoft / FLAML

A fast library for AutoML and tuning. Join our Discord: https://discord.gg/Cppx2vSPVP.
https://microsoft.github.io/FLAML/

Add support for PaLM, Claude-2, Cohere, Llama2, CodeLlama (100+ LLMs) #1212

Open ishaan-jaff opened 10 months ago

ishaan-jaff commented 10 months ago

This PR adds support for the above-mentioned LLMs using LiteLLM (https://github.com/BerriAI/litellm/). LiteLLM is a lightweight package that simplifies LLM API calls - use any LLM as a drop-in replacement for gpt-3.5-turbo.

Example

import os

from litellm import completion

## set ENV variables
os.environ["OPENAI_API_KEY"] = "openai key"
os.environ["COHERE_API_KEY"] = "cohere key"
os.environ["ANTHROPIC_API_KEY"] = "anthropic key"

messages = [{ "content": "Hello, how are you?","role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion(model="command-nightly", messages)

# anthropic call
response = completion(model="claude-instant-1", messages=messages)


ishaan-jaff commented 10 months ago

@sonichi @qingyun-wu Can I get a review on this PR?

Happy to add docs/tests if this initial commit looks good to you.

qingyun-wu commented 10 months ago

Hi @ishaan-jaff, thanks for the contribution. This support looks appealing! But is directly replacing openai_completion a good approach? I am concerned about potential unexpected consequences of this replacement. How about adding a LiteLLM option and using LiteLLM only when the user explicitly configures it?
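
A minimal sketch of the kind of opt-in routing this suggests (the names use_litellm and create_completion are hypothetical, not FLAML's actual API, and the default branch assumes the pre-1.0 openai.ChatCompletion interface in use at the time):

import openai

def create_completion(model, messages, use_litellm=False, **config):
    # Hypothetical opt-in wrapper: the default behavior stays on the
    # existing OpenAI path; LiteLLM is used only when explicitly requested.
    if use_litellm:
        from litellm import completion
        return completion(model=model, messages=messages, **config)
    return openai.ChatCompletion.create(model=model, messages=messages, **config)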