Is your feature request related to a problem? Please describe.
When using GPTCache, I have a set of LLM requests and their corresponding answers. I want to preload (preheat) them into the cache so that the program can match against them directly at runtime.
I want to use similarity matching. By default, a sqlite database and a faiss vector index are built, but there is no suitable method to load the data I need to preheat into both stores.
I tried using the gptcache.update() function for this, but the preheated data was only inserted into sqlite; the faiss index was not built correctly.
Describe the solution you'd like.
Could you provide an appropriate method for the cache preheating operation? Thank you!
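To illustrate what a preheating method would need to do: each question/answer pair must be written to both backing stores, the scalar store (sqlite here) and the vector index (faiss here), so that similarity search and answer lookup stay consistent. Below is a minimal pure-Python sketch of that idea with hypothetical stand-in classes and a toy embedding; it is not GPTCache's actual API, only a conceptual outline of the requested behavior:

```python
import hashlib

class ScalarStore:
    """Stand-in for the sqlite scalar store: id -> (question, answer)."""
    def __init__(self):
        self.rows = {}

    def insert(self, qid, question, answer):
        self.rows[qid] = (question, answer)

class VectorIndex:
    """Stand-in for the faiss vector index: id -> embedding."""
    def __init__(self):
        self.vectors = {}

    def insert(self, qid, embedding):
        self.vectors[qid] = embedding

def embed(text):
    # Toy deterministic "embedding"; a real setup would call an embedding model.
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255 for b in digest[:8]]

def preheat(pairs, scalar, index):
    # The key requirement: every pair lands in BOTH stores, keeping the
    # vector index (for similarity matching) and the scalar store
    # (for answer retrieval) consistent with each other.
    for question, answer in pairs:
        qid = hashlib.md5(question.encode()).hexdigest()
        scalar.insert(qid, question, answer)
        index.insert(qid, embed(question))

scalar, index = ScalarStore(), VectorIndex()
preheat([("What is GPTCache?", "A semantic cache for LLM responses.")],
        scalar, index)
```

The point of the sketch is only that update() currently performs the scalar insert but not the matching vector insert, which is why faiss ends up unpopulated.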
Describe an alternate solution.
No response
Anything else? (Additional Context)
No response