Open shixiao11 opened 10 months ago
+1
The number of large models is growing explosively, so it may not be practical to keep adding model-specific adapters. You could try the generic get/put API in GPTCache instead; demo code: https://github.com/zilliztech/GPTCache/blob/main/examples/adapter/api.py
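For reference, here is a minimal sketch of that approach: caching Gemini (or text-bison) responses with GPTCache's generic `get`/`put` API rather than a dedicated adapter. The `google.generativeai` usage, the `gemini-pro` model name, the API key placeholder, and the `cached_generate` helper are illustrative assumptions, not part of GPTCache or an official integration.

```python
# Sketch: wrap a Google LLM call with GPTCache's generic get/put adapter API.
# Assumes the google-generativeai client; adjust model name and credentials as needed.
import google.generativeai as genai
from gptcache import cache
from gptcache.adapter.api import get, put

cache.init()  # default exact-match cache; see the linked api.py example for similarity caching
genai.configure(api_key="YOUR_API_KEY")      # placeholder credential
model = genai.GenerativeModel("gemini-pro")  # or a text-bison model via Vertex AI

def cached_generate(prompt: str) -> str:
    """Return a cached answer if present, otherwise call the LLM and cache the result."""
    answer = get(prompt)
    if answer is not None:
        return answer
    answer = model.generate_content(prompt).text
    put(prompt, answer)
    return answer

print(cached_generate("What is GPTCache?"))
print(cached_generate("What is GPTCache?"))  # second call is served from the cache
```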
Is your feature request related to a problem? Please describe.
Hello team, my team is working on a Gen AI project, and all of our projects are based on Google Cloud. Would it be possible to integrate GPTCache with Google LLMs (Gemini or text-bison)?
Describe the solution you'd like.
Describe an alternate solution.
No response
Anything else? (Additional Context)
No response