Open · ksachdeva opened this issue 2 months ago
Try using set_llm_cache instead: https://python.langchain.com/v0.2/docs/how_to/llm_caching/

Where did you see the cache= parameter documented?
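(For reference, a minimal sketch of the suggested set_llm_cache setup, assuming the SQLite backend from langchain_community; the database path is illustrative:)

```python
from langchain_core.globals import set_llm_cache
from langchain_community.cache import SQLiteCache

# Register a process-wide cache instead of passing cache= per model.
set_llm_cache(SQLiteCache(database_path=".langchain.db"))
```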
It is part of all LLM classes; see the documentation of BaseLLM: https://api.python.langchain.com/en/latest/language_models/langchain_core.language_models.llms.BaseLLM.html#langchain_core.language_models.llms.BaseLLM

Also, setting it globally is not necessarily a good idea.
@efriis it is okay that you are unaware of the API, but I am not sure what the rush was to close the issue. What made you think set_llm_cache would not cause the same issue?

At a minimum, you should have asked me to try set_llm_cache and see if it is a workaround, without closing the issue. This is not good etiquette. Please re-open this issue.
@hwchase17
Reopening! Thanks for the link
Checked other resources
Example Code
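A minimal sketch of the kind of code that triggers the behavior; the model name "llama3" and the database path are illustrative, and cache= is the per-instance field from BaseLLM/BaseChatModel that this issue is about:

```python
from langchain_community.cache import SQLiteCache
from langchain_ollama import ChatOllama

cache = SQLiteCache(database_path=".langchain.db")

# First call populates the cache.
llm = ChatOllama(model="llama3", temperature=0.0, cache=cache)
print(llm.invoke("Tell me a joke"))

# With a different temperature this should be a cache miss and create a
# new entry, but the stale entry from the first call is returned instead.
llm = ChatOllama(model="llama3", temperature=0.9, cache=cache)
print(llm.invoke("Tell me a joke"))
```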
Error Message and Stack Trace (if applicable)
There is no error stack trace, as the problem is in how the LLM response is being cached in SQLite.
Description
Here is how the entries in SQLiteCache look when the langchain-ollama partner package is used:
Whereas if the Ollama class from langchain_community is used, the SQLiteCache looks like this:
As you can see, the entries in the filter column do not include other properties like temperature, model name, etc. Hence, when these parameters are changed, the old entries for a prompt, if present, are picked up instead of new ones being created.
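To inspect this locally, a quick sketch, assuming the default SQLiteCache schema (a full_llm_cache table whose llm column holds the serialized key) and the illustrative .langchain.db path:

```python
import sqlite3

# Print the cache keys; with langchain-ollama the serialized key omits
# parameters such as temperature and model name, so different
# configurations collide on the same cached row.
con = sqlite3.connect(".langchain.db")
for prompt, llm_key in con.execute("SELECT prompt, llm FROM full_llm_cache"):
    print(llm_key[:120])
con.close()
```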
System Info
langchain==0.2.12
langchain-chroma==0.1.2
langchain-community==0.2.11
langchain-core==0.2.28
langchain-ollama==0.1.1
langchain-openai==0.1.20
langchain-text-splitters==0.2.2