langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com
MIT License

Langchain-Ollama: ChatOllama stopped responding and AgentExecutor only performs actions #28281

Closed miguelg719 closed 5 hours ago

miguelg719 commented 3 days ago

Checked other resources

Example Code

Main Example on GitHub

from langchain_ollama import ChatOllama

llm = ChatOllama(model="llama3.2")
llm.invoke("Sing a ballad of LangChain.") 

Error Message and Stack Trace (if applicable)

TypeError: 'NoneType' object is not iterable

Description

Tested with multiple people; the new version of Ollama must have changed its output format, because ChatOllama can no longer produce any text result.

System Info

System Information

OS: Darwin
OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:14:38 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6020
Python Version: 3.11.3 (main, Apr 19 2023, 18:49:55) [Clang 14.0.6 ]

Package Information

langchain_core: 0.3.19
langchain: 0.1.17
langchain_community: 0.0.37
langsmith: 0.1.144
langchain_experimental: 0.0.57
langchain_ollama: 0.2.0
langchain_openai: 0.1.6
langchain_text_splitters: 0.0.1

Optional packages not installed

langgraph
langserve

Other Dependencies

aiohttp: 3.9.3 aiosqlite: 0.18.0 aleph-alpha-client: Installed. No version info available. anthropic: Installed. No version info available. arxiv: Installed. No version info available. assemblyai: Installed. No version info available. async-timeout: 4.0.2 atlassian-python-api: Installed. No version info available. azure-ai-documentintelligence: Installed. No version info available. azure-ai-formrecognizer: Installed. No version info available. azure-ai-textanalytics: Installed. No version info available. azure-cognitiveservices-speech: Installed. No version info available. azure-core: Installed. No version info available. azure-cosmos: Installed. No version info available. azure-identity: Installed. No version info available. azure-search-documents: Installed. No version info available. beautifulsoup4: 4.12.2 bibtexparser: Installed. No version info available. cassio: Installed. No version info available. chardet: 4.0.0 clarifai: Installed. No version info available. cloudpickle: 2.2.1 cohere: Installed. No version info available. couchbase: Installed. No version info available. dashvector: Installed. No version info available. databricks-vectorsearch: Installed. No version info available. dataclasses-json: 0.6.5 datasets: 2.14.6 dgml-utils: Installed. No version info available. docarray[hnswlib]: Installed. No version info available. elasticsearch: Installed. No version info available. esprima: Installed. No version info available. faiss-cpu: 1.8.0 faker: Installed. No version info available. feedparser: Installed. No version info available. fireworks-ai: Installed. No version info available. friendli-client: Installed. No version info available. geopandas: Installed. No version info available. gitpython: 3.1.43 google-cloud-documentai: Installed. No version info available. gql: Installed. No version info available. gradientai: Installed. No version info available. hdbcli: Installed. No version info available. hologres-vector: Installed. No version info available. html2text: Installed. No version info available. httpx: 0.27.0 httpx-sse: Installed. No version info available. huggingface_hub: 0.26.2 javelin-sdk: Installed. No version info available. jinja2: 3.1.3 jq: Installed. No version info available. jsonpatch: 1.33 jsonschema: 4.17.3 lxml: 4.9.2 manifest-ml: Installed. No version info available. markdownify: Installed. No version info available. motor: Installed. No version info available. msal: Installed. No version info available. mwparserfromhell: Installed. No version info available. mwxml: Installed. No version info available. newspaper3k: Installed. No version info available. nlpcloud: Installed. No version info available. numexpr: 2.8.4 numpy: 1.24.3 nvidia-riva-client: Installed. No version info available. oci: Installed. No version info available. ollama: 0.4.0 openai: 1.26.0 openapi-pydantic: Installed. No version info available. openlm: Installed. No version info available. oracle-ads: Installed. No version info available. oracledb: Installed. No version info available. orjson: 3.9.15 packaging: 23.2 pandas: 1.5.3 pdfminer-six: 20231228 pgvector: Installed. No version info available. praw: Installed. No version info available. premai: Installed. No version info available. presidio-analyzer: Installed. No version info available. presidio-anonymizer: Installed. No version info available. psychicapi: Installed. No version info available. py-trello: Installed. No version info available. pydantic: 2.10.1 pyjwt: 2.8.0 pymupdf: Installed. No version info available. 
pypdf: 4.2.0 pypdfium2: 4.30.0 pyspark: Installed. No version info available. PyYAML: 6.0.1 qdrant-client: Installed. No version info available. rank-bm25: Installed. No version info available. rapidfuzz: 3.9.0 rapidocr-onnxruntime: Installed. No version info available. rdflib: Installed. No version info available. requests: 2.32.3 requests-toolbelt: 1.0.0 rspace_client: Installed. No version info available. scikit-learn: 1.2.2 sentence-transformers: 3.2.1 SQLAlchemy: 1.4.39 sqlite-vss: Installed. No version info available. streamlit: Installed. No version info available. sympy: 1.13.1 tabulate: 0.9.0 telethon: Installed. No version info available. tenacity: 8.2.2 tidb-vector: Installed. No version info available. tiktoken: 0.6.0 timescale-vector: Installed. No version info available. torch: 2.5.1 tqdm: 4.67.0 transformers: 4.46.2 tree-sitter: Installed. No version info available. tree-sitter-languages: Installed. No version info available. typer: 0.13.0 typing-extensions: 4.12.2 upstash-redis: Installed. No version info available. vdms: Installed. No version info available. vowpal-wabbit-next: Installed. No version info available. xata: Installed. No version info available. xmltodict: Installed. No version info available.

rtuin commented 3 days ago

I'm also getting this error using the qwen2.5-coder:7b model. Perhaps it helps to share the stack trace:

Traceback (most recent call last):
  [...redacted stack trace...]
    response = llm.invoke(messages)
               ^^^^^^^^^^^^^^^^^^^^
  File "my_anonimized_project_dir/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 286, in invoke
    self.generate_prompt(
  File "my_anonimized_project_dir/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 786, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "my_anonimized_project_dir/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 643, in generate
    raise e
  File "my_anonimized_project_dir/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 633, in generate
    self._generate_with_cache(
  File "my_anonimized_project_dir/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 851, in _generate_with_cache
    result = self._generate(
             ^^^^^^^^^^^^^^^
  File "my_anonimized_project_dir/.venv/lib/python3.11/site-packages/langchain_ollama/chat_models.py", line 644, in _generate
    final_chunk = self._chat_stream_with_aggregation(
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "my_anonimized_project_dir/.venv/lib/python3.11/site-packages/langchain_ollama/chat_models.py", line 558, in _chat_stream_with_aggregation
    tool_calls=_get_tool_calls_from_response(stream_resp),
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "my_anonimized_project_dir/.venv/lib/python3.11/site-packages/langchain_ollama/chat_models.py", line 70, in _get_tool_calls_from_response
    for tc in response["message"]["tool_calls"]:
TypeError: 'NoneType' object is not iterable

Edit: mention that I use a different model.

Ruslando commented 3 days ago

Same thing happening with Llama3.1

Fernando7181 commented 3 days ago

I was having this problem using llama3, but once I switched to llama3.1 everything worked fine, using the base model.

AlbertoFormaggio1 commented 3 days ago

@Fernando7181 could it be that llama3.1 was already downloaded on your system, while llama3 was freshly downloaded after the update? I am having this problem with every model I am using (all of them pulled today from ollama).

pythongirl325 commented 3 days ago

I believe the Ollama 0.4.0 update changed how the tool call API works: it now returns None instead of omitting the tool_calls key from the response message (https://github.com/ollama/ollama-python/blob/main/ollama/_types.py#L220)

https://github.com/langchain-ai/langchain/blob/master/libs/partners/ollama/langchain_ollama/chat_models.py#L69

Here the condition should be response["message"]["tool_calls"] is not None instead of "tool_calls" in response["message"].
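
For anyone following along, here is a minimal sketch of what the guarded helper could look like (reconstructed from the snippet visible in the tracebacks below, not copied from any merged patch; the name/args field layout is an assumption). Using .get covers both the pre-0.4 case, where the key is absent, and 0.4.0, where it is an explicit None:

from uuid import uuid4

from langchain_core.messages import ToolCall
from langchain_core.messages.tool import tool_call


def _get_tool_calls_from_response(response: dict) -> list[ToolCall]:
    """Extract tool calls, tolerating both a missing key and an explicit None."""
    tool_calls: list[ToolCall] = []
    if "message" in response:
        # Ollama 0.4.0 sets tool_calls=None when no tools are called, so the
        # old membership check passed and the loop then iterated over None.
        if response["message"].get("tool_calls") is not None:
            for tc in response["message"]["tool_calls"]:
                tool_calls.append(
                    tool_call(
                        id=str(uuid4()),
                        name=tc["function"]["name"],  # assumed field layout
                        args=tc["function"]["arguments"],
                    )
                )
    return tool_calls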

miguelg719 commented 3 days ago

@pythongirl325 great lead, attaching a log to support this issue. Nonetheless, it's interesting that in my case it's able to select a tool but not generate a text response, even when taking the tools out and only using a simple ChatOllama call.

ERROR:backend.main:Error testing Ollama: 'NoneType' object is not iterable
Traceback (most recent call last):
  File "/app/backend/main.py", line 58, in test_ollama
    response = await ollama_chat_completion(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/backend/agent/services.py", line 29, in ollama_chat_completion
    response = await llm.ainvoke(messages)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 307, in ainvoke
    llm_result = await self.agenerate_prompt(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 796, in agenerate_prompt
    return await self.agenerate(
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 756, in agenerate
    raise exceptions[0]
  File "/usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 924, in _agenerate_with_cache
    result = await self._agenerate(
             ^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain_ollama/chat_models.py", line 731, in _agenerate
    final_chunk = await self._achat_stream_with_aggregation(
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain_ollama/chat_models.py", line 601, in _achat_stream_with_aggregation
    tool_calls=_get_tool_calls_from_response(stream_resp),
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain_ollama/chat_models.py", line 70, in _get_tool_calls_from_response
    for tc in response["message"]["tool_calls"]:
TypeError: 'NoneType' object is not iterable

Fernando7181 commented 3 days ago

@Fernando7181 could it be that llama3.1 was already downloaded on your system, while llama3 was freshly downloaded after the update? I am having this problem with every model I am using (all of them pulled today from ollama).

I don't think so, because I downloaded it not that long ago, and I'm using it for my vector database and RAG system, where it seems to be working just fine. I know that when I was using llama3 it wasn't working.

rrajakaec commented 3 days ago

How do we resolve this issue? Do we need to re-download the llama3.2 model, or do we need to switch to ChatOpenAI? Do we need to wait until the issue is resolved by the maintainers?

From:

llm = ChatOllama(model='llama3.2', temperature=0)

To:

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="llama3.2",
    api_key="ollama",  # placeholder; Ollama ignores the key
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    temperature=0,
)

pythongirl325 commented 3 days ago

@pythongirl325 great lead, attaching a log to support this issue. Nonetheless, it's interesting that in my case it's able to select a tool but not generate a text response, even when taking the tools out and only using a simple ChatOllama call.

I experienced this without any tools as well. I wanted to try and switch from using the ollama api directly to using the langchain library.

Here's the code I ran to get the issue:

import langchain
import langchain_ollama
from langchain_core.messages import HumanMessage, SystemMessage

model = langchain_ollama.ChatOllama(
    model="hermes3:8b"
)

messages = [
    SystemMessage(content="Transate the following from English to Italian."),
    HumanMessage(content="How are you?")
]

model.invoke(messages)

My stack trace looks pretty much like yours.

I have not used langchain before, so I might be doing something wrong here.

rtuin commented 3 days ago

@edmcman made a fix for this here: https://github.com/langchain-ai/langchain/pull/28291

edmcman commented 3 days ago

As a work-around, you can pip install 'ollama<0.4.0'
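
To see why the pin helps, here is a small sketch (assuming ollama 0.3.x and a local server with the model pulled): the pre-0.4 client returns a plain dict whose message has no tool_calls key at all when the model makes no tool calls, so the membership check in _get_tool_calls_from_response stays False and nothing iterates over None.

import ollama

resp = ollama.chat(model="llama3.2", messages=[{"role": "user", "content": "Hi"}])
# Prints False on ollama 0.3.x when no tools are called; on 0.4.0 the typed
# response carries an explicit tool_calls=None, which is what broke the check.
print("tool_calls" in resp["message"])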

rrajakaec commented 3 days ago

As a work-around, you can pip install 'ollama<0.4.0'

After downgrading ollama from 0.4.0 to 0.3.3, the issue was resolved.

lsukharn commented 3 days ago

pip install 'ollama<0.4.0' works for me. Thanks @edmcman

Fernando7181 commented 3 days ago

I experienced this without any tools as well. [...] My stack trace looks pretty much like yours.

This is how I'm doing mine:

def ask(query: str):
    chain = rag_chain()  # rag_chain() is defined elsewhere in my project
    result = chain["run"]({"input": query})
    print(result)

ask("What is 2 + 2?")

hgudella commented 2 days ago

Here the condition should be response["message"]["tool_calls"] is not None instead of "tool_calls" in response["message"].

This is the actual fix. Can we please create a new version and publish it with the fix? Thank you!

Fernando7181 commented 2 days ago

I agree, I think response["message"]["tool_calls"] is not None should be the right fix, and we could close this issue.

edmcman commented 2 days ago

That is the fix here: https://github.com/langchain-ai/langchain/pull/28291

jmorganca commented 2 days ago

Hi all, this is also fixed in the ollama package from version 0.4.1 onwards – so sorry about that: https://github.com/ollama/ollama-python/releases/tag/v0.4.1

pip install -U ollama
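
For reference, a quick way to sanity-check the upgrade (a minimal sketch, assuming a local Ollama server with llama3.2 already pulled); this exact call raised the TypeError on ollama 0.4.0 and should print text on 0.4.1+:

from langchain_ollama import ChatOllama

llm = ChatOllama(model="llama3.2")
print(llm.invoke("Say hello in one short sentence.").content)
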
espositodaniele commented 1 day ago

Still having the issue here:

Package Information

ollama: 0.4.1
langchain: 0.3.8
langchain-community: 0.3.8
langchain-core: 0.3.21
langchain-ollama: 0.2.0

Code example

from langchain_ollama import ChatOllama

model = ChatOllama(model="llama3.2", temperature=0)
model.invoke("Chi è il presidente degli Stati Uniti?")

Error

TypeError                                 Traceback (most recent call last)
Cell In[34], line 4
      1 from langchain_ollama import ChatOllama
      3 model = ChatOllama(model="llama3.2", temperature=0)
----> 4 model.invoke("Chi è il presidente degli Stati Uniti?")

File ~/aidev/pdf-rag/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py:286, in BaseChatModel.invoke(self, input, config, stop, **kwargs)
    275 def invoke(
    276     self,
    277     input: LanguageModelInput,
   (...)
    281     **kwargs: Any,
    282 ) -> BaseMessage:
    283     config = ensure_config(config)
    284     return cast(
    285         ChatGeneration,
--> 286         self.generate_prompt(
    287             [self._convert_input(input)],
    288             stop=stop,
    289             callbacks=config.get("callbacks"),
    290             tags=config.get("tags"),
    291             metadata=config.get("metadata"),
    292             run_name=config.get("run_name"),
    293             run_id=config.pop("run_id", None),
...
     76             )
     77         )
     78     return tool_calls

TypeError: 'NoneType' object is not iterable

edmcman commented 1 day ago

@espositodaniele Are you sure that is the entire traceback?

espositodaniele commented 1 day ago

Here is the full error:

{
    "name": "TypeError",
    "message": "'NoneType' object is not iterable",
    "stack": "---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[45], line 4
      1 from langchain_ollama import ChatOllama
      3 model = ChatOllama(model=\"llama3.1\", temperature=0)
----> 4 model.invoke(\"Chi è il presidente degli Stati Uniti?\")
      6 # from langchain_openai.chat_models import ChatOpenAI
      7 
      8 # model = ChatOpenAI(openai_api_key=OPENAI_API_KEY, model=MODEL)
      9 # model.invoke(\"Dimmi un gioco\")

File ~/aidev/pdf-rag/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py:286, in BaseChatModel.invoke(self, input, config, stop, **kwargs)
    275 def invoke(
    276     self,
    277     input: LanguageModelInput,
   (...)
    281     **kwargs: Any,
    282 ) -> BaseMessage:
    283     config = ensure_config(config)
    284     return cast(
    285         ChatGeneration,
--> 286         self.generate_prompt(
    287             [self._convert_input(input)],
    288             stop=stop,
    289             callbacks=config.get(\"callbacks\"),
    290             tags=config.get(\"tags\"),
    291             metadata=config.get(\"metadata\"),
    292             run_name=config.get(\"run_name\"),
    293             run_id=config.pop(\"run_id\", None),
    294             **kwargs,
    295         ).generations[0][0],
    296     ).message

File ~/aidev/pdf-rag/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py:786, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
    778 def generate_prompt(
    779     self,
    780     prompts: list[PromptValue],
   (...)
    783     **kwargs: Any,
    784 ) -> LLMResult:
    785     prompt_messages = [p.to_messages() for p in prompts]
--> 786     return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)

File ~/aidev/pdf-rag/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py:643, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
    641         if run_managers:
    642             run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 643         raise e
    644 flattened_outputs = [
    645     LLMResult(generations=[res.generations], llm_output=res.llm_output)  # type: ignore[list-item]
    646     for res in results
    647 ]
    648 llm_output = self._combine_llm_outputs([res.llm_output for res in results])

File ~/aidev/pdf-rag/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py:633, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
    630 for i, m in enumerate(messages):
    631     try:
    632         results.append(
--> 633             self._generate_with_cache(
    634                 m,
    635                 stop=stop,
    636                 run_manager=run_managers[i] if run_managers else None,
    637                 **kwargs,
    638             )
    639         )
    640     except BaseException as e:
    641         if run_managers:

File ~/aidev/pdf-rag/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py:851, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
    849 else:
    850     if inspect.signature(self._generate).parameters.get(\"run_manager\"):
--> 851         result = self._generate(
    852             messages, stop=stop, run_manager=run_manager, **kwargs
    853         )
    854     else:
    855         result = self._generate(messages, stop=stop, **kwargs)

File ~/aidev/pdf-rag/.venv/lib/python3.13/site-packages/langchain_ollama/chat_models.py:644, in ChatOllama._generate(self, messages, stop, run_manager, **kwargs)
    637 def _generate(
    638     self,
    639     messages: List[BaseMessage],
   (...)
    642     **kwargs: Any,
    643 ) -> ChatResult:
--> 644     final_chunk = self._chat_stream_with_aggregation(
    645         messages, stop, run_manager, verbose=self.verbose, **kwargs
    646     )
    647     generation_info = final_chunk.generation_info
    648     chat_generation = ChatGeneration(
    649         message=AIMessage(
    650             content=final_chunk.text,
   (...)
    654         generation_info=generation_info,
    655     )

File ~/aidev/pdf-rag/.venv/lib/python3.13/site-packages/langchain_ollama/chat_models.py:558, in ChatOllama._chat_stream_with_aggregation(self, messages, stop, run_manager, verbose, **kwargs)
    545 for stream_resp in self._create_chat_stream(messages, stop, **kwargs):
    546     if not isinstance(stream_resp, str):
    547         chunk = ChatGenerationChunk(
    548             message=AIMessageChunk(
    549                 content=(
    550                     stream_resp[\"message\"][\"content\"]
    551                     if \"message\" in stream_resp
    552                     and \"content\" in stream_resp[\"message\"]
    553                     else \"\"
    554                 ),
    555                 usage_metadata=_get_usage_metadata_from_generation_info(
    556                     stream_resp
    557                 ),
--> 558                 tool_calls=_get_tool_calls_from_response(stream_resp),
    559             ),
    560             generation_info=(
    561                 dict(stream_resp) if stream_resp.get(\"done\") is True else None
    562             ),
    563         )
    564         if final_chunk is None:
    565             final_chunk = chunk

File ~/aidev/pdf-rag/.venv/lib/python3.13/site-packages/langchain_ollama/chat_models.py:70, in _get_tool_calls_from_response(response)
     68 if \"message\" in response:
     69     if \"tool_calls\" in response[\"message\"]:
---> 70         for tc in response[\"message\"][\"tool_calls\"]:
     71             tool_calls.append(
     72                 tool_call(
     73                     id=str(uuid4()),
   (...)
     76                 )
     77             )
     78 return tool_calls

TypeError: 'NoneType' object is not iterable"
}

edmcman commented 1 day ago

It does seem like the same problem. Did you restart your notebook kernel to ensure that it has the new ollama code?
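
For what it's worth, you can confirm from inside the kernel which client version was actually picked up (a quick check using only the standard library):

import importlib.metadata

print(importlib.metadata.version("ollama"))  # expect 0.4.1 or newer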

espositodaniele commented 1 day ago

Thank you, I have restarted everything, and it seems to be working with the updates.

Fernando7181 commented 13 hours ago

Glad that the issue was solved. So, do we close this issue now?

miguelg719 commented 5 hours ago

Confirmed working on 0.4.1, closing the issue. Thanks everyone!