zilliztech / GPTCache

Semantic cache for LLMs. Fully integrated with LangChain and llama_index.
https://gptcache.readthedocs.io
MIT License

[Feature]: please support google LLM #601

Open shixiao11 opened 10 months ago

shixiao11 commented 10 months ago

Is your feature request related to a problem? Please describe.

Hello team, my team is working on a Gen AI project, and all of our projects are based on Google Cloud. Would it be possible to integrate GPTCache with Google LLMs (Gemini or text-bison)?

Describe the solution you'd like.

Describe an alternate solution.

No response

Anything else? (Additional Context)

No response

varunmehra5 commented 5 months ago

+1

SimFG commented 5 months ago

The number of large models is growing explosively, so I don't think it is practical to keep adding per-model integrations. Instead, you can try the generic `get`/`put` API in GPTCache; demo code: https://github.com/zilliztech/GPTCache/blob/main/examples/adapter/api.py