Is there an existing issue for the same bug?

Describe the bug and reproduction steps

Gemini with prompt caching fails with an exception:

litellm.APIConnectionError: GeminiException - Gemini Context Caching only supports 1 message/block of continuous messages. Your idx, messages were - [(0, {'content': [{'type': 'text', 'text': 'A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed answers to the user\'s questions.\n\n[1] The assistant can use a Python environment with <execute_ipython>...

liteLLM reports that Gemini supports prompt caching, along with Claude, GPT models, and DeepSeek. Unlike those, however, Gemini's caching is neither automatic (the cache attribute is not silently ignored) nor compatible with Anthropic's format.
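A minimal reproduction sketch of the kind of request that triggers this: the system prompt is sent as an Anthropic-style content block carrying a `cache_control` marker, which Gemini's handler rejects instead of silently ignoring. The model name and the exact prompt text here are illustrative assumptions, not the precise OpenHands request; the API call is guarded behind an API-key check so the message construction can be inspected without credentials.

```python
import os

# Illustrative system prompt (the real one in the log is longer and truncated).
system_prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed answers to the user's questions."
)

# Anthropic-style content blocks: the cache_control marker is what Gemini
# rejects with "Gemini Context Caching only supports 1 message/block ...".
messages = [
    {
        "role": "system",
        "content": [
            {
                "type": "text",
                "text": system_prompt,
                "cache_control": {"type": "ephemeral"},  # Anthropic-style marker
            }
        ],
    },
    {"role": "user", "content": "Hello"},
]

if os.environ.get("GEMINI_API_KEY"):
    import litellm

    # Assumed model name for illustration; raises
    # litellm.APIConnectionError: GeminiException - ...
    litellm.completion(model="gemini/gemini-1.5-pro", messages=messages)
```

With Claude the same `cache_control` block enables caching, and with GPT models it is ignored; only the Gemini path raises.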
OpenHands Installation
Development workflow
OpenHands Version
No response
Operating System
None
Logs, Errors, Screenshots, and Additional Context
No response