zilliztech / GPTCache

Semantic cache for LLMs. Fully integrated with LangChain and llama_index.
https://gptcache.readthedocs.io
MIT License
6.96k stars 490 forks

[Bug]: KeyError: 'message' #528

Closed pavanpraneeth closed 9 months ago

pavanpraneeth commented 11 months ago

Current Behavior

When I enable caching with LangChain and run a query for the first time, say llm.predict('tell me a joke'), it works and returns in about 2 seconds. When I run the same llm.predict('tell me a joke') again, it throws the error below. I am using AzureOpenAI.

 57 @root_validator
 58 def set_text(cls, values: Dict[str, Any]) -> Dict[str, Any]:
 59     """Set the text attribute to be the contents of the message."""
---> 60     values["text"] = values["message"].content
 61     return values

KeyError: 'message'
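One plausible reading of the traceback (a sketch of the failure mode, not LangChain's actual code; FakeMessage and the standalone set_text function are illustrative stand-ins) is that a cached result is rebuilt without the "message" field the validator unconditionally reads:

```python
from typing import Any, Dict

# Stand-in for a message object with a .content attribute.
class FakeMessage:
    def __init__(self, content: str):
        self.content = content

# Mimics the validator in the traceback: it assumes "message" is always present.
def set_text(values: Dict[str, Any]) -> Dict[str, Any]:
    values["text"] = values["message"].content  # KeyError if "message" is missing
    return values

# A fresh (uncached) chat response carries the message object, so this succeeds.
ok = set_text({"message": FakeMessage("Why did the chicken cross the road?")})
assert ok["text"] == "Why did the chicken cross the road?"

# A cache hit that stored only the text reproduces the reported error.
try:
    set_text({"text": "Why did the chicken cross the road?"})
except KeyError as e:
    print("KeyError:", e)  # → KeyError: 'message'
```

If this reading is right, the cache is returning a plain-text generation while the chat model path expects a full message object on the second (cached) call.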

Expected Behavior

The cached call should typically return in microseconds or milliseconds.

Steps To Reproduce

No response

Environment

No response

Anything else?

No response

SimFG commented 11 months ago

From the current problem description, I can't tell what caused the error.

nikkoxgonzales commented 10 months ago

I have this error as well; 21 days and still no reply, hmm.

SimFG commented 10 months ago

@nikkoxgonzales I tried the langchain example a few days ago and got no errors. If you want me to help you, please give me more details.

nikkoxgonzales commented 10 months ago

I did the example on langchain's website for gptcache.

I am using AzureOpenAI/AzureChatOpenAI when I got this [KeyError: 'message'].
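For reference, a minimal sketch (unverified, since I don't have Azure credentials) of the LangChain GPTCache example adapted to AzureChatOpenAI; the deployment name and API version here are placeholders you would replace with your own values:

```python
import hashlib

import langchain
from langchain.cache import GPTCache
from langchain.chat_models import AzureChatOpenAI

from gptcache import Cache
from gptcache.manager.factory import manager_factory
from gptcache.processor.pre import get_prompt


def init_gptcache(cache_obj: Cache, llm: str):
    # One cache directory per model, keyed by a hash of the model string.
    hashed_llm = hashlib.sha256(llm.encode()).hexdigest()
    cache_obj.init(
        pre_embedding_func=get_prompt,
        data_manager=manager_factory(manager="map", data_dir=f"map_cache_{hashed_llm}"),
    )


langchain.llm_cache = GPTCache(init_gptcache)

llm = AzureChatOpenAI(
    deployment_name="my-deployment",   # placeholder
    openai_api_version="2023-05-15",   # placeholder
)
print(llm.predict("tell me a joke"))
```

If this setup also raises KeyError: 'message' on the second call, that would point at the chat-model cache path rather than anything Azure-specific in your code.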

SimFG commented 10 months ago

@nikkoxgonzales I can't reproduce the error. It may be caused by an incompatibility with the Azure API, but I don't have an Azure account, so you could check the differences between the OpenAI and Azure APIs. The following is my full demo code:

import gptcache
from gptcache import Cache
from gptcache.manager.factory import manager_factory
from gptcache.processor.pre import get_prompt
import hashlib
import time

from langchain.cache import GPTCache
import langchain
from langchain.llms import OpenAIChat

def get_hashed_name(name):
    return hashlib.sha256(name.encode()).hexdigest()

def init_gptcache(cache_obj: Cache, llm: str):
    hashed_llm = get_hashed_name(llm)
    cache_obj.init(
        pre_embedding_func=get_prompt,
        data_manager=manager_factory(manager="map", data_dir=f"map_cache_{hashed_llm}"),
    )

langchain.llm_cache = GPTCache(init_gptcache)

print("langchain:", langchain.__version__)
print("gptcache:", gptcache.__version__)

llm = OpenAIChat()
now = time.time()
print(llm("Tell me a joke"))
print(time.time() - now)
now = time.time()
print(llm("Tell me a joke"))
print(time.time() - now)

output:

langchain: 0.0.288
gptcache: 0.1.41
xxxx/venv/lib/python3.8/site-packages/langchain/llms/openai.py:787: UserWarning: You are trying to use a chat model. This way of initializing it is no longer supported. Instead, please use: `from langchain.chat_models import ChatOpenAI`
  warnings.warn(
Sure, here's a joke for you: 

Why don't scientists trust atoms? 

Because they make up everything!
2.1356260776519775
Sure, here's a joke for you: 

Why don't scientists trust atoms? 

Because they make up everything!
0.0004391670227050781
SimFG commented 9 months ago

I will close this issue. If you have any other problems, please open a new issue.