krrishdholakia opened 1 year ago
AFAIK LiteLLM does not support a managed caching solution; the docs only mention self-managed Redis.
Tracking this here - https://github.com/BerriAI/litellm/issues/432
Do you want us to provide a hosted caching solution? @nsbradford
TBH, not a top priority. Writing my own cache takes less than an hour, and then I don't have to worry about whether third-party caching middleware is reliable, which is why in practice I tend to write my own cache on larger projects.
(would be open to it if implemented, though.)
https://github.com/nsbradford/SemanticSearch/blob/f8189d5bf05260af95584b0a9d878233b772a234/backend/llm.py#L13
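The roll-your-own approach mentioned above can be sketched roughly as below. This is a hedged, minimal in-memory version, not the implementation in the linked `llm.py` (which may key and store responses differently); `CompletionCache` and `cached_completion` are hypothetical names, and the key is just a hash of the canonical JSON of the model and messages:

```python
import hashlib
import json


class CompletionCache:
    """Minimal in-memory cache for LLM completions, keyed by model + messages.

    A sketch only: a production version might use Redis, add TTLs, or
    include other request parameters (temperature, etc.) in the key.
    """

    def __init__(self):
        self._store = {}

    def _key(self, model, messages):
        # Deterministic key: hash the canonical JSON of the request.
        payload = json.dumps({"model": model, "messages": messages}, sort_keys=True)
        return hashlib.sha256(payload.encode("utf-8")).hexdigest()

    def get(self, model, messages):
        return self._store.get(self._key(model, messages))

    def set(self, model, messages, response):
        self._store[self._key(model, messages)] = response


def cached_completion(cache, call_fn, model, messages):
    """Return a cached response if present; otherwise call the API and store the result."""
    hit = cache.get(model, messages)
    if hit is not None:
        return hit
    response = call_fn(model=model, messages=messages)
    cache.set(model, messages, response)
    return response
```

Here `call_fn` stands in for whatever completion function is being wrapped (e.g. an OpenAI or LiteLLM call); the second identical request returns the stored response without hitting the API.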
Hey @nsbradford,
I saw you're logging responses to PromptLayer but also using Helicone. Curious - why?
If it's for caching - is there something you think is missing from our implementation? https://docs.litellm.ai/docs/caching/