BerriAI / litellm

Python SDK, Proxy Server (LLM Gateway) to call 100+ LLM APIs in OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, Replicate, Groq]
https://docs.litellm.ai/docs/

Team based caching support #2317

Open krrishdholakia opened 7 months ago

krrishdholakia commented 7 months ago

Allow an admin to turn caching on/off per team.

-- User Request

krrishdholakia commented 7 months ago

User wants to

We have cache controls per key today.
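A minimal sketch of how a team-level cache toggle could layer on top of the existing per-key cache controls. All names and the settings schema here are hypothetical illustrations, not the actual LiteLLM proxy API: the idea is simply that a key-level override, when present, takes precedence over the team-level toggle.

```python
# Hypothetical precedence logic for team-based caching (illustrative only;
# the real LiteLLM proxy schema may differ).

def cache_enabled(team_settings: dict, key_settings: dict) -> bool:
    """Return whether caching applies for a request.

    A key-level "cache" setting wins if present; otherwise fall back to
    the team-level toggle; default to caching enabled.
    """
    if "cache" in key_settings:
        return bool(key_settings["cache"])
    return bool(team_settings.get("cache", True))


# Example: admin disabled caching for the team, but one key opts back in.
team = {"team_id": "team-1", "cache": False}
key_opt_in = {"cache": True}
key_default = {}

print(cache_enabled(team, key_opt_in))   # True  (key override)
print(cache_enabled(team, key_default))  # False (team toggle applies)
```

With this precedence, flipping the team toggle changes behavior for every key that has not set an explicit per-key override.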

Manouchehri commented 6 months ago

Loosely related to #2852.