zilliztech / GPTCache

Semantic cache for LLMs. Fully integrated with LangChain and llama_index.
https://gptcache.readthedocs.io
MIT License

[Feature]: How to perform cache preheating? #579

Open Ouwzhong opened 10 months ago

Ouwzhong commented 10 months ago

Is your feature request related to a problem? Please describe.

When using GPTCache, I already have a set of LLM requests and their corresponding answers. I want to preheat the cache with them so that the program can match them directly at runtime.

I want to use similarity matching. By default, a SQLite database and a Faiss vector index are built, but there is no suitable method to load the data I need to preheat into both stores. I have tried using the gptcache.update() function for this, but the preheated data was only inserted into SQLite, and the Faiss index was not built correctly.

Describe the solution you'd like.

Can you provide an appropriate method for cache preheating operation? Thank you!

Describe an alternate solution.

No response

Anything else? (Additional Context)

No response

SimFG commented 10 months ago

Maybe you can try using cache.import_data(). Reference: https://github.com/zilliztech/GPTCache/blob/main/tests/integration_tests/examples/sqlite_faiss_mock/test_example_sqlite_faiss.py