zilliztech / GPTCache

Semantic cache for LLMs. Fully integrated with LangChain and llama_index.
https://gptcache.readthedocs.io
MIT License

[Bug]: Support for GPT4o-mini or gpt4- models #657

Open oussamaJmaaa opened 1 month ago

oussamaJmaaa commented 1 month ago

Hello, I'm having trouble using the gpt-4o-mini model with LangChain or OpenAI alongside GPTCache.

Any idea how to solve this? Also, could you please share which OpenAI and GPTCache versions you are using?

Here is my current setup:

from langchain.llms import OpenAI
from gptcache.adapter.langchain_models import LangChainLLMs

llm = LangChainLLMs(llm=OpenAI(model="gpt-4o-mini", temperature=0))

When I run my QA chain, I get this error: "This is a chat model and not supported in the v1/completions endpoint"
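That error suggests the request is being sent to the legacy v1/completions endpoint, which chat-only models such as gpt-4o-mini do not support. A minimal sketch of a possible workaround, assuming GPTCache's `LangChainChat` adapter and LangChain's `ChatOpenAI` wrapper (both present in recent releases of those libraries, and requiring an `OPENAI_API_KEY`), would route the model through the chat endpoint instead:

```python
# Sketch (untested here): use the chat adapter rather than the
# completions-based LangChainLLMs, since gpt-4o-mini is chat-only.
# Assumes gptcache's LangChainChat and langchain's ChatOpenAI.
from langchain.chat_models import ChatOpenAI
from gptcache.adapter.langchain_models import LangChainChat

# LangChainChat wraps a chat model so calls go through GPTCache
# while hitting the chat completions endpoint.
chat = LangChainChat(chat=ChatOpenAI(model="gpt-4o-mini", temperature=0))
```

Whether this resolves the issue likely depends on the installed GPTCache and LangChain versions.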

I tried upgrading my OpenAI version to 1.51.2 and then encountered this error: "module 'openai' has no attribute 'api_base'. Did you mean: 'api_type'?"
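The `api_base` error comes from the openai 1.x SDK, which removed module-level attributes such as `openai.api_base` that older GPTCache releases still reference. One possible workaround (an assumption based on the SDK change, not an official fix) is to pin the last pre-1.0 SDK until GPTCache supports the 1.x client:

```shell
# Pin the pre-1.0 OpenAI SDK, which still exposes openai.api_base
pip install "openai<1.0"
```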

Any help would be greatly appreciated! Thanks in advance.