BerriAI / litellm

Python SDK, Proxy Server to call 100+ LLM APIs using the OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, Replicate, Groq]
https://docs.litellm.ai/docs/

[Feature]: Support Cloudflare AI Gateway - universal link #1158

Open shuther opened 9 months ago

shuther commented 9 months ago

The Feature

CF has a free offer for this gateway, but there are multiple URLs, one per provider endpoint, plus one universal CF entry point. The idea is to connect to this universal entry point rather than to the provider-specific ones.
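For reference, a minimal sketch of a call to the universal entry point, assuming the request shape Cloudflare documented at the time (an ordered JSON array of provider requests that the gateway tries in sequence until one succeeds); the account/gateway IDs, keys, and model names below are placeholders:

```python
import os

import requests

# Placeholder IDs; substitute your own Cloudflare account and gateway.
ACCOUNT_ID = "YOUR_ACCOUNT_ID"
GATEWAY_ID = "YOUR_GATEWAY_ID"
UNIVERSAL_URL = f"https://gateway.ai.cloudflare.com/v1/{ACCOUNT_ID}/{GATEWAY_ID}"

# The universal endpoint takes an ordered array of provider requests;
# the gateway tries each entry in turn until one succeeds.
payload = [
    {
        "provider": "openai",
        "endpoint": "chat/completions",
        "headers": {
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
            "Content-Type": "application/json",
        },
        "query": {
            "model": "gpt-3.5-turbo",
            "messages": [{"role": "user", "content": "What is Cloudflare?"}],
        },
    },
    {
        "provider": "workers-ai",
        "endpoint": "@cf/meta/llama-2-7b-chat-int8",
        "headers": {
            "Authorization": f"Bearer {os.environ['CF_API_TOKEN']}",
            "Content-Type": "application/json",
        },
        "query": {
            "messages": [{"role": "user", "content": "What is Cloudflare?"}]
        },
    },
]

response = requests.post(UNIVERSAL_URL, json=payload, timeout=30)
print(response.json())
```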

Motivation, pitch

This gateway should be seen as a third-party observer.

Twitter / LinkedIn details

No response

krrishdholakia commented 9 months ago

oh - what does 'free offer' mean? @shuther

shuther commented 9 months ago

There are some limits, and while future pricing has been shared, it is not yet in effect; we don't pay for usage since the service is not considered production-ready. So it's good for testing.

Manouchehri commented 8 months ago

That's the pricing for Workers AI, not AI Gateway. AI Gateway will likely be free (or close to it) forever.

shuther commented 8 months ago

Yes, but connecting to AI Gateway is interesting as an observer on the edge (though their approach to entry points is not practical). And connecting to Cloudflare Workers AI is interesting in terms of the offer itself. Therefore, connecting from litellm to Cloudflare AI Gateway seems compelling; or do you have a concern with it?

Manouchehri commented 8 months ago

Oh I love AI Gateway; I think I’m the first LiteLLM user who started to use AI Gateway with Azure OpenAI on LiteLLM. It’s super nice for troubleshooting.
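For context, a minimal sketch of what that wiring can look like, assuming LiteLLM's special handling for `gateway.ai.cloudflare.com` in `api_base` (which appends the deployment from the `azure/...` model string); the account, gateway, resource, and deployment names are placeholders:

```python
import os

import litellm

# Point LiteLLM's Azure provider at the AI Gateway route instead of the
# Azure endpoint directly, so the gateway observes/logs the traffic.
response = litellm.completion(
    model="azure/my-gpt-4-deployment",  # placeholder deployment name
    api_base=(
        "https://gateway.ai.cloudflare.com/v1/YOUR_ACCOUNT_ID/YOUR_GATEWAY_ID"
        "/azure-openai/my-resource"  # placeholder Azure resource name
    ),
    api_key=os.environ["AZURE_API_KEY"],
    api_version="2023-07-01-preview",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```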

What would LiteLLM gain by using the universal endpoint though? LiteLLM already has fallbacks.
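For comparison, LiteLLM's built-in fallbacks look roughly like this (model names and keys below are placeholders):

```python
from litellm import Router

# Two model groups; names and keys are placeholders.
router = Router(
    model_list=[
        {
            "model_name": "gpt-3.5-turbo",
            "litellm_params": {
                "model": "openai/gpt-3.5-turbo",
                "api_key": "sk-...",  # placeholder
            },
        },
        {
            "model_name": "mistral-small",
            "litellm_params": {
                "model": "mistral/mistral-small",
                "api_key": "...",  # placeholder
            },
        },
    ],
    # If a call to the primary group fails, retry on the fallback group.
    fallbacks=[{"gpt-3.5-turbo": ["mistral-small"]}],
)

response = router.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
)
```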

shuther commented 8 months ago

> Oh I love AI Gateway; I think I’m the first LiteLLM user who started to use AI Gateway with Azure OpenAI on LiteLLM. It’s super nice for troubleshooting.
>
> What would LiteLLM gain by using the universal endpoint though? LiteLLM already has fallbacks.

So you are using litellm -> AI Gateway -> Azure? I would prefer the flexibility to change the model from the application (a mix of OpenAI and Mistral), so I would need to leverage the universal endpoint to manage the connection (for fallbacks, the comparison is more Cloudflare vs. OpenRouter).