I'm also getting this error using the qwen2.5-coder:7b model. Perhaps it helps to share the stack trace:
```
Traceback (most recent call last):
[...redacted stack trace...]
  response = llm.invoke(messages)
             ^^^^^^^^^^^^^^^^^^^^
  File "my_anonimized_project_dir/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 286, in invoke
    self.generate_prompt(
  File "my_anonimized_project_dir/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 786, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "my_anonimized_project_dir/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 643, in generate
    raise e
  File "my_anonimized_project_dir/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 633, in generate
    self._generate_with_cache(
  File "my_anonimized_project_dir/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 851, in _generate_with_cache
    result = self._generate(
             ^^^^^^^^^^^^^^^
  File "my_anonimized_project_dir/.venv/lib/python3.11/site-packages/langchain_ollama/chat_models.py", line 644, in _generate
    final_chunk = self._chat_stream_with_aggregation(
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "my_anonimized_project_dir/.venv/lib/python3.11/site-packages/langchain_ollama/chat_models.py", line 558, in _chat_stream_with_aggregation
    tool_calls=_get_tool_calls_from_response(stream_resp),
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "my_anonimized_project_dir/.venv/lib/python3.11/site-packages/langchain_ollama/chat_models.py", line 70, in _get_tool_calls_from_response
    for tc in response["message"]["tool_calls"]:
TypeError: 'NoneType' object is not iterable
```
Edit: mentioned that I use a different model.
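The failing pattern in that last frame is easy to reproduce in isolation; this toy `message` dict (a hypothetical stand-in, not real Ollama output) mimics what the 0.4.0 response apparently looks like, with the key present but mapped to `None`:

```python
# The key exists but maps to None, so a membership test passes
# while iterating over the value raises.
message = {"content": "hi", "tool_calls": None}

if "tool_calls" in message:           # True: the key is present
    for tc in message["tool_calls"]:  # TypeError: 'NoneType' object is not iterable
        print(tc)
```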
Same thing happening with Llama3.1
I was having this problem using llama3, but once I switched to llama3.1 everything is working fine, still using base_model.
@Fernando7181 can it be the case that llama3.1 was already downloaded in your system, while llama3 was freshly downloaded after the update? I am having this problem with every model I am using (all of them pulled today from ollama)
I believe the Ollama 0.4.0 update changed how the tool call API works: it now returns `None` instead of omitting the `tool_calls` key on the response message (https://github.com/ollama/ollama-python/blob/main/ollama/_types.py#L220). Here the condition should be `response["message"]["tool_calls"] is not None` instead of `"tool_calls" in response["message"]`.
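For illustration, a minimal sketch of that guard change, with the function shape inferred from the traceback above rather than copied from the actual upstream source:

```python
from typing import Any

def _get_tool_calls_from_response(response: dict[str, Any]) -> list[Any]:
    """Collect tool calls from an Ollama chat response chunk (simplified sketch)."""
    tool_calls: list[Any] = []
    message = response.get("message") or {}
    # Ollama >= 0.4.0 includes the tool_calls key with a None value instead
    # of omitting it, so a plain `"tool_calls" in message` check lets None
    # through and the iteration below blows up. Checking `is not None`
    # covers both the missing-key and the explicit-None cases.
    if message.get("tool_calls") is not None:
        for tc in message["tool_calls"]:
            tool_calls.append(tc)
    return tool_calls
```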
@pythongirl325 great lead, attaching log to support this issue. Nonetheless, it's interesting that in my case it's able to select a tool but not generate a text output response, even when taking out tools and only using a simple ChatOllama call
```
ERROR:backend.main:Error testing Ollama: 'NoneType' object is not iterable
Traceback (most recent call last):
  File "/app/backend/main.py", line 58, in test_ollama
    response = await ollama_chat_completion(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/backend/agent/services.py", line 29, in ollama_chat_completion
    response = await llm.ainvoke(messages)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 307, in ainvoke
    llm_result = await self.agenerate_prompt(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 796, in agenerate_prompt
    return await self.agenerate(
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 756, in agenerate
    raise exceptions[0]
  File "/usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 924, in _agenerate_with_cache
    result = await self._agenerate(
             ^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain_ollama/chat_models.py", line 731, in _agenerate
    final_chunk = await self._achat_stream_with_aggregation(
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain_ollama/chat_models.py", line 601, in _achat_stream_with_aggregation
    tool_calls=_get_tool_calls_from_response(stream_resp),
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain_ollama/chat_models.py", line 70, in _get_tool_calls_from_response
    for tc in response["message"]["tool_calls"]:
TypeError: 'NoneType' object is not iterable
```
> @Fernando7181 can it be the case that llama3.1 was already downloaded in your system, while llama3 was freshly downloaded after the update? I am having this problem with every model I am using (all of them pulled today from ollama)
I don't think so, because I downloaded it not that long ago. I'm using it for my vector database and RAG system, and it seems to be working just fine. I do know that when I was using llama3 it wasn't working.
How do we resolve this issue? Do we need to re-download the llama3.2 model, or do we need to switch to ChatOpenAI? Or do we need to wait until the issue is resolved by the maintainers?

From `llm = ChatOllama(model='llama3.2', temperature=0)` to `llm = ChatOpenAI(model="llama3.2", api_key="ollama", base_url="http://localhost:11434", temperature=0)`
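If going the ChatOpenAI route as a stop-gap, a hedged sketch of that switch (note the colon before the port, and that Ollama's OpenAI-compatible API is served under the `/v1` path):

```python
from langchain_openai import ChatOpenAI

# Point ChatOpenAI at Ollama's OpenAI-compatible endpoint as a workaround.
# The api_key value is a placeholder; the local Ollama server ignores it.
llm = ChatOpenAI(
    model="llama3.2",
    api_key="ollama",
    base_url="http://localhost:11434/v1",
    temperature=0,
)

print(llm.invoke("Say hello").content)
```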
> @pythongirl325 great lead, attaching log to support this issue. Nonetheless, it's interesting that in my case it's able to select a tool but not generate a text output response, even when taking out tools and only using a simple ChatOllama call
I experienced this without any tools as well. I wanted to try and switch from using the ollama api directly to using the langchain library.
Here's the code I ran to get the issue:
```python
import langchain
import langchain_ollama
from langchain_core.messages import HumanMessage, SystemMessage

model = langchain_ollama.ChatOllama(
    model="hermes3:8b"
)

messages = [
    SystemMessage(content="Translate the following from English to Italian."),
    HumanMessage(content="How are you?"),
]

model.invoke(messages)
```
My stack trace looks pretty much like yours.
I have not used langchain before, so I might be doing something wrong here.
@edmcman made a fix for this here: https://github.com/langchain-ai/langchain/pull/28291
As a work-around, you can `pip install 'ollama<0.4.0'`
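If your project pins dependencies in a requirements file, the equivalent temporary constraint would be:

```
# requirements.txt: temporary pin until a fixed release ships
ollama<0.4.0
```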
> As a work-around, you can `pip install 'ollama<0.4.0'`
After downgrading from ollama 0.4.0 to ollama 0.3.3, the issue got resolved.
`pip install 'ollama<0.4.0'` works for me. Thanks @edmcman
> I experienced this without any tools as well. I wanted to try and switch from using the ollama api directly to using the langchain library.
This is how I'm doing mine:

```python
def ask(query: str):
    chain = rag_chain()  # rag_chain() is defined elsewhere in my project
    result = chain["run"]({"input": query})
    print(result)

ask("What is 2 + 2?")
```
> I believe the Ollama 0.4.0 update changed how the tool call API works: it now returns `None` instead of omitting the `tool_calls` key on the response message (https://github.com/ollama/ollama-python/blob/main/ollama/_types.py#L220). Here the condition should be `response["message"]["tool_calls"] is not None` instead of `"tool_calls" in response["message"]`.
This is the actual fix. Can we please create a new version and publish it with the fix? Thank you!
I agree, I think `response["message"]["tool_calls"] is not None` should be the right fix, and we could close this issue.
That is the fix here: https://github.com/langchain-ai/langchain/pull/28291
Hi all, this is also fixed in the `ollama` package from version 0.4.1 onwards (so sorry about that): https://github.com/ollama/ollama-python/releases/tag/v0.4.1

`pip install -U ollama`
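If the error persists after upgrading, it's worth checking which version the running interpreter actually sees (a stale notebook kernel keeps the old module loaded). A quick check that doesn't rely on the package exposing a version attribute:

```python
from importlib.metadata import version

# Should print 0.4.1 or newer once the upgrade has taken effect
# in the environment the kernel is actually using.
print(version("ollama"))
```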
Still having the issue here:

```
ollama              0.4.1
langchain           0.3.8
langchain-community 0.3.8
langchain-core      0.3.21
langchain-ollama    0.2.0
```
```python
from langchain_ollama import ChatOllama

model = ChatOllama(model="llama3.2", temperature=0)
model.invoke("Chi è il presidente degli Stati Uniti?")
```
```
TypeError                                 Traceback (most recent call last)
Cell In[34], line 4
      1 from langchain_ollama import ChatOllama
      3 model = ChatOllama(model="llama3.2", temperature=0)
----> 4 model.invoke("Chi è il presidente degli Stati Uniti?")

File ~/aidev/pdf-rag/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py:286, in BaseChatModel.invoke(self, input, config, stop, **kwargs)
    275 def invoke(
    276     self,
    277     input: LanguageModelInput,
        (...)
    281     **kwargs: Any,
    282 ) -> BaseMessage:
    283     config = ensure_config(config)
    284     return cast(
    285         ChatGeneration,
--> 286         self.generate_prompt(
    287             [self._convert_input(input)],
    288             stop=stop,
    289             callbacks=config.get("callbacks"),
    290             tags=config.get("tags"),
    291             metadata=config.get("metadata"),
    292             run_name=config.get("run_name"),
    293             run_id=config.pop("run_id", None),
...
     76             )
     77         )
     78     return tool_calls

TypeError: 'NoneType' object is not iterable
```
@espositodaniele Are you sure that is the entire traceback?
Here's the full error:

```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[45], line 4
      1 from langchain_ollama import ChatOllama
      3 model = ChatOllama(model="llama3.1", temperature=0)
----> 4 model.invoke("Chi è il presidente degli Stati Uniti?")
      6 # from langchain_openai.chat_models import ChatOpenAI
      7
      8 # model = ChatOpenAI(openai_api_key=OPENAI_API_KEY, model=MODEL)
      9 # model.invoke("Dimmi un gioco")

File ~/aidev/pdf-rag/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py:286, in BaseChatModel.invoke(self, input, config, stop, **kwargs)
    275 def invoke(
    276     self,
    277     input: LanguageModelInput,
        (...)
    281     **kwargs: Any,
    282 ) -> BaseMessage:
    283     config = ensure_config(config)
    284     return cast(
    285         ChatGeneration,
--> 286         self.generate_prompt(
    287             [self._convert_input(input)],
    288             stop=stop,
    289             callbacks=config.get("callbacks"),
    290             tags=config.get("tags"),
    291             metadata=config.get("metadata"),
    292             run_name=config.get("run_name"),
    293             run_id=config.pop("run_id", None),
    294             **kwargs,
    295         ).generations[0][0],
    296     ).message

File ~/aidev/pdf-rag/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py:786, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
    778 def generate_prompt(
    779     self,
    780     prompts: list[PromptValue],
        (...)
    783     **kwargs: Any,
    784 ) -> LLMResult:
    785     prompt_messages = [p.to_messages() for p in prompts]
--> 786     return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)

File ~/aidev/pdf-rag/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py:643, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
    641     if run_managers:
    642         run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 643     raise e
    644 flattened_outputs = [
    645     LLMResult(generations=[res.generations], llm_output=res.llm_output)  # type: ignore[list-item]
    646     for res in results
    647 ]
    648 llm_output = self._combine_llm_outputs([res.llm_output for res in results])

File ~/aidev/pdf-rag/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py:633, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
    630 for i, m in enumerate(messages):
    631     try:
    632         results.append(
--> 633             self._generate_with_cache(
    634                 m,
    635                 stop=stop,
    636                 run_manager=run_managers[i] if run_managers else None,
    637                 **kwargs,
    638             )
    639         )
    640     except BaseException as e:
    641         if run_managers:

File ~/aidev/pdf-rag/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py:851, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
    849 else:
    850     if inspect.signature(self._generate).parameters.get("run_manager"):
--> 851         result = self._generate(
    852             messages, stop=stop, run_manager=run_manager, **kwargs
    853         )
    854     else:
    855         result = self._generate(messages, stop=stop, **kwargs)

File ~/aidev/pdf-rag/.venv/lib/python3.13/site-packages/langchain_ollama/chat_models.py:644, in ChatOllama._generate(self, messages, stop, run_manager, **kwargs)
    637 def _generate(
    638     self,
    639     messages: List[BaseMessage],
        (...)
    642     **kwargs: Any,
    643 ) -> ChatResult:
--> 644     final_chunk = self._chat_stream_with_aggregation(
    645         messages, stop, run_manager, verbose=self.verbose, **kwargs
    646     )
    647     generation_info = final_chunk.generation_info
    648     chat_generation = ChatGeneration(
    649         message=AIMessage(
    650             content=final_chunk.text,
        (...)
    654         generation_info=generation_info,
    655     )

File ~/aidev/pdf-rag/.venv/lib/python3.13/site-packages/langchain_ollama/chat_models.py:558, in ChatOllama._chat_stream_with_aggregation(self, messages, stop, run_manager, verbose, **kwargs)
    545 for stream_resp in self._create_chat_stream(messages, stop, **kwargs):
    546     if not isinstance(stream_resp, str):
    547         chunk = ChatGenerationChunk(
    548             message=AIMessageChunk(
    549                 content=(
    550                     stream_resp["message"]["content"]
    551                     if "message" in stream_resp
    552                     and "content" in stream_resp["message"]
    553                     else ""
    554                 ),
    555                 usage_metadata=_get_usage_metadata_from_generation_info(
    556                     stream_resp
    557                 ),
--> 558                 tool_calls=_get_tool_calls_from_response(stream_resp),
    559             ),
    560             generation_info=(
    561                 dict(stream_resp) if stream_resp.get("done") is True else None
    562             ),
    563         )
    564         if final_chunk is None:
    565             final_chunk = chunk

File ~/aidev/pdf-rag/.venv/lib/python3.13/site-packages/langchain_ollama/chat_models.py:70, in _get_tool_calls_from_response(response)
     68 if "message" in response:
     69     if "tool_calls" in response["message"]:
---> 70         for tc in response["message"]["tool_calls"]:
     71             tool_calls.append(
     72                 tool_call(
     73                     id=str(uuid4()),
        (...)
     76                 )
     77             )
     78     return tool_calls

TypeError: 'NoneType' object is not iterable
```
It does seem like the same problem. Did you restart your notebook kernel to ensure that it has the new ollama code?
Thank you, I have restarted everything, and it seems to be working with the updates.
Glad that the issue was solved. Do we close this issue now?
Confirmed working on 0.4.1, closing the issue. Thanks everyone!
Example Code
Main Example on GitHub
Error Message and Stack Trace (if applicable)
TypeError: 'NoneType' object is not iterable
Description
Tested with multiple people: the new version of Ollama must have changed the output format, and ChatOllama now cannot produce any text result.