antonio-veezoo opened this issue 3 weeks ago
Hey @antonio-veezoo,
could you elaborate a bit? What kind of caching do you have in mind (prompt, context, etc.)?
I'm thinking of the functionality described here for Claude: https://www.anthropic.com/news/prompt-caching, and here for Gemini: https://cloud.google.com/vertex-ai/generative-ai/docs/context-cache/context-cache-overview
OK, we can add prompt caching for Anthropic models (it was on our list anyway).
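For context, Anthropic's prompt caching works at the Messages API level by attaching a `cache_control` marker to a large, stable prompt block; everything up to and including that block is cached and reused on subsequent requests with the same prefix. Below is a minimal sketch of what such a request payload looks like. The model name and the `build_cached_request` helper are illustrative (not part of this library); no network call is made.

```python
# Stand-in for a large, reusable system prompt worth caching.
LONG_SYSTEM_PROMPT = "You are a helpful assistant. " * 100


def build_cached_request(user_message: str) -> dict:
    """Build an Anthropic Messages API payload with the system prompt marked cacheable."""
    return {
        "model": "claude-3-5-sonnet-20241022",  # illustrative model name
        "max_tokens": 1024,
        "system": [
            {
                "type": "text",
                "text": LONG_SYSTEM_PROMPT,
                # Marks the prefix up to and including this block for caching;
                # later requests sharing the same prefix read from the cache.
                "cache_control": {"type": "ephemeral"},
            }
        ],
        "messages": [{"role": "user", "content": user_message}],
    }


payload = build_cached_request("Summarize our refund policy.")
print(payload["system"][0]["cache_control"])  # {'type': 'ephemeral'}
```

A library-level API would presumably expose this as a flag or builder option on the system/message blocks rather than requiring users to construct the raw payload.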
Does the library currently support caching for these providers in any way, or are there plans to add support?
Thanks for any info!