zilliztech / GPTCache

Semantic cache for LLMs. Fully integrated with LangChain and llama_index.
https://gptcache.readthedocs.io
MIT License

[Feature]: GPTCache fully integrated with phidata #639

Open KevinZhang19870314 opened 3 months ago


Is your feature request related to a problem? Please describe.

I ran into a problem when using GPTCache together with phidata: there is currently no way to integrate GPTCache into a phidata assistant.

Describe the solution you'd like.

I would like to initialize the cache and the LLM through GPTCache and pass them into phidata. Example below:

```python
from phi.assistant import Assistant
from phi.llm.openai import OpenAIChat
from phi.tools.yfinance import YFinanceTools
from gptcache import cache
from gptcache.adapter import openai

assistant = Assistant(
    cache=cache.init(),          # this line for cache
    llm=openai(model="gpt-4o"),  # this line for llm
    tools=[YFinanceTools(stock_price=True, analyst_recommendations=True, company_info=True, company_news=True)],
    show_tool_calls=True,
    markdown=True,
)
assistant.print_response("What is the stock price of NVDA")
assistant.print_response("Write a comparison between NVDA and AMD, use all tools available.")
```
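The integration being asked for follows the same pattern GPTCache's existing adapters use: intercept the model call, answer from the cache on a hit, and only invoke the LLM on a miss. A minimal sketch of that pattern, with a plain dict standing in for GPTCache's semantic store and a `CachedLLM` wrapper that is entirely hypothetical (not a real phidata or GPTCache class):

```python
from typing import Callable, Dict

# Hypothetical sketch: a cache-first wrapper around any LLM callable,
# illustrating the adapter style this feature request implies.
class CachedLLM:
    def __init__(self, llm: Callable[[str], str]):
        self.llm = llm                    # underlying model call
        self._cache: Dict[str, str] = {}  # exact-match stand-in for GPTCache's semantic store

    def __call__(self, prompt: str) -> str:
        if prompt in self._cache:         # cache hit: skip the model call
            return self._cache[prompt]
        answer = self.llm(prompt)         # cache miss: call the model, then store
        self._cache[prompt] = answer
        return answer

# Stub model so the sketch runs without an API key; counts real invocations.
calls = []
def fake_llm(prompt: str) -> str:
    calls.append(prompt)
    return f"answer:{prompt}"

llm = CachedLLM(fake_llm)
llm("What is the stock price of NVDA")
llm("What is the stock price of NVDA")  # second call is served from the cache
```

In GPTCache proper, the dict would be replaced by `cache.init()` plus an embedding-based similarity lookup; the point of the sketch is only where the interception happens, which is the hook phidata's `llm=` parameter would need to expose.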

Describe an alternate solution.

No response

Anything else? (Additional Context)

No response