Hi,
I would like to use DSPy for extensive Q&A and information extraction over very long input texts. Since DSPy builds the prompt from the Signature's instructions and then appends the input fields, the resulting prompt changes for every question. Is there a way to instruct DSPy to place the instructions after the input fields? The long document would then form a stable prompt prefix across questions, so we could reuse the KV cache in a self-hosted LLM or save money with OpenAI's prompt caching.
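
For concreteness, here is a minimal sketch of the pattern I have in mind, using the class-based Signature syntax (`DocQA`, its field names, and the model choice are just illustrative, not anything prescribed by DSPy):

```python
import dspy

# Hypothetical model choice, only for the sketch.
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

class DocQA(dspy.Signature):
    """Answer the question using only the provided document."""

    document: str = dspy.InputField(desc="very long text, identical across calls")
    question: str = dspy.InputField(desc="changes on every call")
    answer: str = dspy.OutputField()

qa = dspy.Predict(DocQA)

long_text = open("big_report.txt").read()  # stand-in for the long input
for q in ["Who wrote it?", "When was it published?"]:
    print(qa(document=long_text, question=q).answer)

# Desired prompt layout: [inputs][instructions] instead of the current
# [instructions][inputs], so the long document becomes a stable prefix
# that a self-hosted KV cache or OpenAI prompt caching can reuse.
```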
Thanks.