AI chat assistant for Obsidian with contextual awareness, smart writing assistance, and one-click edits. Features vault-aware conversations, semantic search, and local model support.
As title says. Please consider adding a setting to toggle Anthropic's prompt caching on and off. Especially with long notes, or multiple notes referenced in a mostly static fashion, this would cut token costs significantly: writing tokens to the cache costs 25% more than a regular input token, but once cached, reads cost only 10% of the regular input price (a 90% discount on reused content).
See:
https://www.anthropic.com/news/prompt-caching
https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching
https://www.anthropic.com/pricing#anthropic-api
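To illustrate the request, here is a minimal sketch of how such a toggle could shape the message payload. The content-block shape (`cache_control: { type: "ephemeral" }` on the last static block) follows Anthropic's Messages API docs linked above; the `buildSystemBlocks` helper, the `cachingEnabled` flag, and the block split are hypothetical and not part of the plugin:

```typescript
// Hypothetical sketch of a prompt-caching toggle for the plugin's
// Anthropic requests. Block shapes mirror the Messages API; the helper
// itself is an assumption, not existing plugin code.

interface TextBlock {
  type: "text";
  text: string;
  cache_control?: { type: "ephemeral" };
}

// Build the system blocks, marking the large static note context for
// caching only when the user has enabled the setting.
function buildSystemBlocks(
  instructions: string,
  noteContext: string,
  cachingEnabled: boolean
): TextBlock[] {
  const contextBlock: TextBlock = { type: "text", text: noteContext };
  if (cachingEnabled) {
    // Anthropic caches everything up to and including this block.
    contextBlock.cache_control = { type: "ephemeral" };
  }
  return [{ type: "text", text: instructions }, contextBlock];
}

// With caching on, the note context carries the cache marker; with it
// off, the payload is a plain uncached system prompt.
const cached = buildSystemBlocks("You are a writing assistant.", "<long vault notes>", true);
const plain = buildSystemBlocks("You are a writing assistant.", "<long vault notes>", false);
console.log(cached[1].cache_control?.type); // "ephemeral"
console.log(plain[1].cache_control);        // undefined
```

The blocks array would then be passed as the `system` field of a Messages API call, so the only change per request is whether the marker is present.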