ishaan-jaff closed this issue 11 months ago
@ishaan-jaff you can achieve this by customizing the pre_process_func; for reference, see last_content_without_prompt.
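A minimal sketch of that customization, assuming the hook is the pre_embedding_func parameter of cache.init (the name GPTCache's README uses) and that it receives the request payload as a dict, the way last_content_without_prompt does; the model_and_prompt name here is made up for illustration:

```python
from gptcache import cache

def model_and_prompt(data, **_):
    # Key the cache on model + prompt instead of the prompt alone.
    # `data` is the request payload, e.g. {"model": ..., "messages": [...]}.
    model = data.get("model", "")
    prompt = data.get("messages")[-1]["content"]
    return f"{model}:{prompt}"

# pre_embedding_func controls the string the cache is keyed/embedded on
cache.init(pre_embedding_func=model_and_prompt)
```

With this, two requests that share a prompt but differ in model produce different cache keys, which is the model + prompt behavior the issue asks for.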
I don't see pre_process_func defined in the codebase. Is last_content_without_prompt called as the pre_process_func for gpt-3.5-turbo?
Done: added some docs on doing this for our users at LiteLLM: https://docs.litellm.ai/docs/caching/gpt_cache#advanced-usage---set-custom-cache-keys
Thanks @SimFG
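For anyone following the link, here is a rough end-to-end sketch of the same idea using GPTCache's documented OpenAI adapter; the model_and_prompt key function is the hypothetical one sketched above, not part of GPTCache or the linked doc:

```python
from gptcache import cache
from gptcache.adapter import openai  # GPTCache's drop-in wrapper for the openai module

# Register the hypothetical model + prompt key function from the sketch above
cache.init(pre_embedding_func=model_and_prompt)
cache.set_openai_key()

# Same prompt under two models: with the custom key they no longer
# collide on the shared prompt text, so each model gets its own entry.
for model in ("gpt-3.5-turbo", "gpt-4"):
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": "what is GPTCache?"}],
    )
    print(model, response["choices"][0]["message"]["content"])
```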
Is your feature request related to a problem? Please describe.
I need to cache using model + prompt as the key. It looks like GPTCache only caches based on the prompt.
Describe the solution you'd like.
Expose the ability to define the cache key.
Describe an alternate solution.
No response
Anything else? (Additional Context)
No response