langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com
MIT License

Ollama (Partner package) and cache integration not working correctly - missing filters / Community Package works #25712

Open ksachdeva opened 2 months ago

ksachdeva commented 2 months ago

Checked other resources

Example Code

from pathlib import Path

from langchain_community.cache import SQLiteCache
from langchain_ollama import OllamaEmbeddings, OllamaLLM

model = "llama3"           # placeholder model name
cache_dir = Path("cache")  # placeholder cache directory

llm = OllamaLLM(
    model=model,
    cache=SQLiteCache(str(cache_dir / f"ollama-{model}.db")),
    temperature=0.4,
    num_ctx=8192,
    num_predict=-1,
)

Error Message and Stack Trace (if applicable)

There is no error stack trace; the problem is how the LLM response is cached in SQLite.

Description

Here is how the entries in SQLiteCache look when the langchain-ollama partner package is used:

[screenshot: SQLiteCache entries when langchain-ollama is used]

Whereas if Ollama from langchain_community is used, the SQLiteCache looks like this:

[screenshot: SQLiteCache entries when langchain_community Ollama is used]

As you can see, the entries in the filter column do not include properties such as temperature, model name, etc. As a result, when these parameters are changed, any existing entries for a prompt are picked up instead of new ones being created.
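To illustrate the effect (this is not the exact internals of either package): the cache is keyed on the prompt plus an llm_string derived from the model's parameters, so if that string does not change when temperature or model name changes, a later lookup silently returns the stale entry. A rough sketch of the keying behavior, using a hypothetical under-specified key:

from langchain_community.cache import SQLiteCache
from langchain_core.outputs import Generation

cache = SQLiteCache(database_path="demo-cache.db")

# Hypothetical llm_string that omits parameters such as temperature and model name:
llm_string = "ollama"

# The first call stores its result under the key (prompt, llm_string)
cache.update("What is 2+2?", llm_string, [Generation(text="generated with temperature=0.4")])

# A later call with temperature=0.9 would build the same key,
# so the stale entry is returned instead of generating a new one:
print(cache.lookup("What is 2+2?", llm_string))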

System Info

langchain==0.2.12
langchain-chroma==0.1.2
langchain-community==0.2.11
langchain-core==0.2.28
langchain-ollama==0.1.1
langchain-openai==0.1.20
langchain-text-splitters==0.2.2

efriis commented 2 months ago

Try using set_llm_cache instead: https://python.langchain.com/v0.2/docs/how_to/llm_caching/
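For example (a minimal sketch following that guide; the model name and database path are placeholders):

from langchain_community.cache import SQLiteCache
from langchain_core.globals import set_llm_cache
from langchain_ollama import OllamaLLM

# Register one global cache for all LLM calls (placeholder database path)
set_llm_cache(SQLiteCache(database_path="ollama-cache.db"))

llm = OllamaLLM(model="llama3", temperature=0.4)  # placeholder model
llm.invoke("Hello")  # the response is now cached in ollama-cache.db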

Where did you see the cache= parameter documented?

ksachdeva commented 2 months ago

It is part of all LLM classes. See the documentation of BaseLLM: https://api.python.langchain.com/en/latest/language_models/langchain_core.language_models.llms.BaseLLM.html#langchain_core.language_models.llms.BaseLLM

Also, setting the cache globally is not necessarily a good idea.


ksachdeva commented 2 months ago

@efriis it is okay that you are unaware of the API, but I am not sure what the rush was to close the issue. What made you think set_llm_cache would not cause the same issue?

At a minimum, you should have asked me to try set_llm_cache and see whether it is a workaround, without closing the issue. This is not good etiquette. Please re-open this issue.

@hwchase17

efriis commented 2 months ago

Reopening! Thanks for the link