[X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
Issue Content
Gemini now allows a developer to create a context cache with the system instructions, contents, tools, and model already set, and then reference that cache as part of a standard query. Caching must be explicit (i.e., it does not happen automatically as part of a request or reply), and a cache expiration (TTL) can be set and later changed.
Context caching does not appear to be supported in Vertex AI at this time.
Open issues:
- Best paradigm for adding to the cache or integrating with LangChain's history system
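For reference, a minimal sketch of what explicit caching looks like with the `google-generativeai` SDK's `caching.CachedContent` API. The model name, prompt strings, and TTL values are illustrative assumptions, and the API call is gated behind an environment check so the helper can be exercised offline:

```python
import datetime
import os


def ttl_string(ttl: datetime.timedelta) -> str:
    # Illustrative helper: cache TTLs are durations; the REST layer
    # represents them as a seconds string like "3600s".
    return f"{int(ttl.total_seconds())}s"


def main():
    # Requires: pip install google-generativeai, and GOOGLE_API_KEY set.
    import google.generativeai as genai
    from google.generativeai import caching

    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

    # Explicitly create a cache holding system instructions and contents.
    cache = caching.CachedContent.create(
        model="models/gemini-1.5-flash-001",  # illustrative model name
        system_instruction="You are a helpful assistant.",
        contents=["<large document text to cache>"],
        ttl=datetime.timedelta(hours=1),  # expiration; can be changed later
    )

    # Reference the cached context as part of a standard query.
    model = genai.GenerativeModel.from_cached_content(cached_content=cache)
    response = model.generate_content("Summarize the cached document.")
    print(response.text)

    # Update the cache expiration after creation.
    cache.update(ttl=datetime.timedelta(hours=2))


if __name__ == "__main__" and os.environ.get("GOOGLE_API_KEY"):
    main()
```

A LangChain integration would presumably need to decide where cache creation fits relative to the existing message-history abstractions, which is the open question above.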