Sendery opened this issue 1 year ago
Do you mean that GPTCache in LangChain doesn't work well now? Reference: https://python.langchain.com/docs/modules/model_io/models/llms/integrations/llm_caching#gptcache
It works if you use this LangChain implementation:
import langchain
from langchain.cache import GPTCache
from langchain.llms import OpenAI

# To make the caching really obvious, let's use a slower model.
llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2)

langchain.llm_cache = GPTCache(init_gptcache)
llm("Tell me a joke")  # simple query to the LLM
# This query uses/returns langchain.schema.Generation
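Here init_gptcache is essentially the callback from the linked docs (a minimal exact-match setup; the hashed data_dir naming just follows that example):

```python
import hashlib

from gptcache import Cache
from gptcache.manager.factory import manager_factory
from gptcache.processor.pre import get_prompt

def init_gptcache(cache_obj: Cache, llm: str) -> None:
    # One exact-match ("map") cache directory per LLM string.
    hashed_llm = hashlib.sha256(llm.encode()).hexdigest()
    cache_obj.init(
        pre_embedding_func=get_prompt,
        data_manager=manager_factory(manager="map", data_dir=f"map_cache_{hashed_llm}"),
    )
```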
But for more complex queries with prompt template usage:
import langchain
from langchain.cache import GPTCache
from langchain.chat_models import ChatOpenAI

# prompt_template and context_embeddings are defined elsewhere in my code.
chat_chain = langchain.LLMChain(
    llm=ChatOpenAI(model="gpt-3.5-turbo", temperature=0, cache=True),
    prompt=prompt_template,
    verbose=True,
)
langchain.llm_cache = GPTCache(init_gptcache)
chat_chain.predict(prompt_template_input_text_1="Tell me a joke",
                   prompt_template_input_text_2=context_embeddings)  # more "complex" query
# This query uses/returns langchain.schema.ChatGeneration
GPTCache does not support langchain.schema.ChatGeneration
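For clarity, here is the shape of the two schema types involved (a minimal sketch):

```python
from langchain.schema import AIMessage, ChatGeneration, Generation

# Plain LLM result: just text. This is what the GPTCache integration expects.
gen = Generation(text="Why did the chicken cross the road? ...")

# Chat model result: wraps a message object instead of a bare string.
# This is the type the cache currently refuses to store.
chat_gen = ChatGeneration(message=AIMessage(content="Why did the chicken cross the road? ..."))
print(chat_gen.text)  # the text is still recoverable from the message
```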
Do other caches work well, like the SQLite cache?
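For comparison, this is a quick way to try it (a sketch; the database path is arbitrary):

```python
import langchain
from langchain.cache import SQLiteCache

# Swap the global cache for the SQLite-backed one and re-run the chat chain.
langchain.llm_cache = SQLiteCache(database_path=".langchain.db")
```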
I'm not sure whether LangChain changed something that broke the cache, or whether there is another reason. GPTCache implements its caching capability against the interface LangChain provided earlier, so there may be an incompatibility now.
@SimFG For chat models, LangChain inherits from langchain.chat_models.base.BaseChatModel instead of langchain.llms.base.BaseLLM.
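A quick way to see the split (a sketch that only checks the class hierarchy):

```python
from langchain.chat_models import ChatOpenAI
from langchain.chat_models.base import BaseChatModel
from langchain.llms import OpenAI
from langchain.llms.base import BaseLLM

# Chat models and completion models sit in separate hierarchies,
# which is why the chat path produces ChatGeneration objects.
print(issubclass(ChatOpenAI, BaseChatModel))  # True
print(issubclass(OpenAI, BaseLLM))            # True
print(issubclass(ChatOpenAI, BaseLLM))        # False
```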
@Sendery I would like to check whether this problem has been fixed or resolved.
Is your feature request related to a problem? Please describe.
I'm using LangChain and started a GPTCache integration. After a few attempts I managed to configure everything as I wished, then I started testing and got:
ValueError: GPTCache only supports caching of normal LLM generations, got <class 'langchain.schema.ChatGeneration'>
I understand this is not a real bug, but LangChain has started suggesting the use of these schemas, which support mixed text and embeddings.
Describe the solution you'd like.
I tracked the error down to this function:
RETURN_VAL_TYPE is langchain.schema.Generation, and my return_val is not, so the code works as expected.
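Roughly, the check behaves like this (my paraphrase of it, not the exact library source; the function name is illustrative):

```python
from typing import Sequence

from langchain.schema import ChatGeneration, Generation

RETURN_VAL_TYPE = Sequence[Generation]

def check_return_val(return_val: RETURN_VAL_TYPE) -> None:
    # Paraphrased guard: anything that is not a plain Generation
    # (including ChatGeneration) is rejected before caching.
    for gen in return_val:
        if isinstance(gen, ChatGeneration) or not isinstance(gen, Generation):
            raise ValueError(
                "GPTCache only supports caching of normal LLM generations, "
                f"got {type(gen)}"
            )
```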
I tried making the following changes:
Describe an alternate solution.
No response
Anything else? (Additional Context)
Thank you for your time and work!