zilliztech / GPTCache

Semantic cache for LLMs. Fully integrated with LangChain and llama_index.
https://gptcache.readthedocs.io
MIT License
6.89k stars 480 forks

[Feature]: please support google LLM #601

Open shixiao11 opened 5 months ago

shixiao11 commented 5 months ago

Is your feature request related to a problem? Please describe.

Hello team, my team is working on a Gen AI project, and all of our projects are based on Google Cloud. Would it be possible to integrate GPTCache with Google's LLMs (Gemini or text-bison)?

Describe the solution you'd like.

Describe an alternate solution.

No response

Anything else? (Additional Context)

No response

varunmehra5 commented 2 weeks ago

+1

SimFG commented 2 weeks ago

The number of large models is growing explosively, and I don't think it is meaningful to keep adding model-specific integrations. Instead, you can use the model-agnostic `put` and `get` API in GPTCache. Demo code: https://github.com/zilliztech/GPTCache/blob/main/examples/adapter/api.py
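To make the suggested pattern concrete, here is a minimal, self-contained sketch of the cache-aside flow behind that API: check the cache before calling the model, and store the answer on a miss. The `DictCache` class and `fake_gemini` function are hypothetical stand-ins so the sketch runs without GPTCache or a Google API key; in practice you would use `put` and `get` from `gptcache.adapter.api` and a real Gemini/text-bison client.

```python
# Sketch of the cache-aside pattern behind GPTCache's put/get API.
# DictCache is a hypothetical stand-in for GPTCache's cache; with GPTCache
# installed you would use `from gptcache.adapter.api import put, get`.

class DictCache:
    """Toy exact-match cache standing in for GPTCache."""
    def __init__(self):
        self._store = {}

    def get(self, prompt):
        # Returns the cached answer, or None on a cache miss.
        return self._store.get(prompt)

    def put(self, prompt, answer):
        self._store[prompt] = answer


cache = DictCache()
calls = []  # tracks how often the "LLM" is actually hit


def fake_gemini(prompt):
    # Placeholder for a real Gemini / text-bison request.
    calls.append(prompt)
    return f"answer to: {prompt}"


def cached_llm_call(prompt, llm_call=fake_gemini):
    answer = cache.get(prompt)   # cache hit?
    if answer is None:           # miss: call the model, then store the answer
        answer = llm_call(prompt)
        cache.put(prompt, answer)
    return answer


print(cached_llm_call("hi"))  # miss: calls the model
print(cached_llm_call("hi"))  # hit: served from the cache
print(len(calls))             # the model was only called once
```

Because the cached function takes the model call as a parameter, the same wrapper works for any provider, which is the point of the thread's suggestion: no per-model adapter is needed.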