Closed krrishdholakia closed 1 year ago
The only change here is that the API key you're accepting can now come from any of those LLM providers (for Llama 2, that's Replicate).
No other changes to the code are required.
If you accepted multiple keys, we could also add model fallbacks to prevent failed requests (if one provider fails, switch to another).
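The fallback idea above can be sketched as a simple try-in-order loop. This is an illustration, not litellm's actual fallback API: the provider call is stubbed out so the snippet runs offline, and `call_provider` / `completion_with_fallbacks` are names made up for this example (with litellm you would call `litellm.completion(model=..., messages=...)` inside the loop).

```python
class ProviderError(Exception):
    """Raised when a provider call fails (stand-in for real API errors)."""


def call_provider(model, messages):
    # Stub provider call: pretend the first provider is down so the
    # fallback path is exercised. A real version would hit the API here.
    if model == "gpt-3.5-turbo":
        raise ProviderError(f"{model} unavailable")
    return {"model": model, "content": "ok"}


def completion_with_fallbacks(models, messages):
    # Try each model in order; return the first successful response.
    last_err = None
    for model in models:
        try:
            return call_provider(model, messages)
        except ProviderError as err:
            last_err = err
    raise last_err


resp = completion_with_fallbacks(
    ["gpt-3.5-turbo", "claude-2"],
    [{"role": "user", "content": "hello"}],
)
print(resp["model"])  # first provider fails, so this falls through to "claude-2"
```

The ordering of `models` doubles as a preference list, so the cheapest or fastest provider can go first with the others as backups.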
Hi @bhaskatripathi ,
Noticed y'all are calling just OpenAI. I'm working on litellm (a simple library to standardize LLM API calls - https://github.com/BerriAI/litellm) and was wondering if we could be helpful.
Added support for Anthropic, Llama 2, Cohere, and PaLM by replacing the raw `openai.ChatCompletion.create` call with `completion` from litellm.
Would love to know if this helps.
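For reference, the swap described above looks roughly like this. It's a sketch: `completion` below is a local stub standing in for `litellm.completion` (which accepts the same `model`/`messages` shape as `openai.ChatCompletion.create`), so the snippet runs without API keys or network access; the model strings are examples of provider routing, not tested values.

```python
# Before (provider-locked):
# response = openai.ChatCompletion.create(
#     model="gpt-3.5-turbo",
#     messages=[{"role": "user", "content": "Summarize this PDF."}],
# )

def completion(model, messages):
    """Local stub mirroring litellm.completion's call shape: the model
    string selects the provider (e.g. "claude-2" for Anthropic,
    "replicate/llama-2-70b-chat" for Llama 2 via Replicate)."""
    return {
        "choices": [
            {"message": {"role": "assistant", "content": f"[{model}] ok"}}
        ]
    }


# After: same arguments, same OpenAI-style response shape.
response = completion(
    model="claude-2",
    messages=[{"role": "user", "content": "Summarize this PDF."}],
)
print(response["choices"][0]["message"]["content"])
```

Because the response keeps the OpenAI `choices[0].message.content` shape, the rest of the calling code doesn't need to change.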