Teachings / langgraph-learning


Issue with Ollama Function – Only Agent Response, No Tool Calls #1

Open Billian-D opened 2 days ago

Billian-D commented 2 days ago

Bug Description:

I have been running the tutorial langgraph_learning/tutorials/05-langgraph_state_management.py and hit an issue where the OllamaFunctions model returns only the agent's response and never invokes the associated tools as expected. All required libraries and dependencies are installed, but the behavior persists.

Here is the code snippet being used:

from langchain_experimental.llms.ollama_functions import OllamaFunctions

# Initialize OllamaFunctions with the necessary configuration
model = OllamaFunctions(
    base_url="http://localhost:11434",  # URL where Ollama is running
    model="llama3.2",  # Choose model version (e.g., llama3.1, llama3.2, etc.)
    format="json"  # Ensure the format is correctly set to json
)
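
To rule out the graph wiring, a minimal standalone check (a sketch reusing the configuration above with a placeholder joke tool; the actual tutorial binds its own tools through LangGraph) is to bind one tool and inspect the raw response:

# Placeholder tool definition in the dict format OllamaFunctions accepts
joke_tool = {
    "name": "get_joke",
    "description": "Return a joke about the given topic.",
    "parameters": {
        "type": "object",
        "properties": {"topic": {"type": "string", "description": "Joke topic"}},
        "required": ["topic"],
    },
}

model_with_tools = model.bind_tools(tools=[joke_tool])
response = model_with_tools.invoke("Tell me a joke?")

# If the model answers in plain text instead of emitting a tool call, this is
# empty/None, which matches the ValueError in the traceback below.
print(getattr(response, "tool_calls", None))
print(response.additional_kwargs)  # some versions put the call here instead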

Logs/Error Messages:

STATE at agent start: {'research_question': 'Tell me a joke?', 'tool_response': [], 'agent_response': [], 'agent_call_count': 0, 'tool_call_count': 0}
Paused ... Hit Enter to Execute Agent Logic...
STATE at agent end: {'research_question': 'Tell me a joke?', 'tool_response': "I've got one! Why couldn't the bicycle stand up by itself? Because it was two-tired!", 'agent_response': AIMessage(content="I've got one! Why couldn't the bicycle stand up by itself? Because it was two-tired!", id='run-e807e68f-ab1b-44b6-b0a3-5de53c9e0e95-0'), 'agent_call_count': 1, 'tool_call_count': 0}
Paused Hit Enter to go to Should Continue Logic...
STATE at should_continue start: {'research_question': 'Tell me a joke?', 'tool_response': "I've got one! Why couldn't the bicycle stand up by itself? Because it was two-tired!", 'agent_response': AIMessage(content="I've got one! Why couldn't the bicycle stand up by itself? Because it was two-tired!", id='run-e807e68f-ab1b-44b6-b0a3-5de53c9e0e95-0'), 'agent_call_count': 1, 'tool_call_count': 0}
Paused at Should Continue Start
Evaluating whether the Question is Answered by the tool response or not... Please wait...
Traceback (most recent call last):
  File "/home/agentic-workflow/langgraph-learning/tutorials/05-langgraph_state_management.py", line 205, in <module>
    result = app.invoke(state)
  File "/home/.local/lib/python3.10/site-packages/langgraph/pregel/__init__.py", line 1749, in invoke
    for chunk in self.stream(
  File "/home/.local/lib/python3.10/site-packages/langgraph/pregel/__init__.py", line 1477, in stream
    for _ in runner.tick(
  File "/home/.local/lib/python3.10/site-packages/langgraph/pregel/runner.py", line 58, in tick
    run_with_retry(t, retry_policy)
  File "/home/.local/lib/python3.10/site-packages/langgraph/pregel/retry.py", line 29, in run_with_retry
    task.proc.invoke(task.input, config)
  File "/home/.local/lib/python3.10/site-packages/langgraph/utils/runnable.py", line 412, in invoke
    input = context.run(step.invoke, input, config)
  File "/home/.local/lib/python3.10/site-packages/langgraph/utils/runnable.py", line 184, in invoke
    ret = context.run(self.func, input, **kwargs)
  File "/home/.local/lib/python3.10/site-packages/langgraph/graph/graph.py", line 95, in _route
    result = self.path.invoke(value, config)
  File "/home/.local/lib/python3.10/site-packages/langgraph/utils/runnable.py", line 176, in invoke
    ret = context.run(self.func, input, **kwargs)
  File "/home/poc/agentic-workflow/langgraph-learning/tutorials/05-langgraph_state_management.py", line 123, in should_continue
    result = category_generator.invoke({"research_question": state["research_question"],
  File "/home/.local/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2879, in invoke
    input = context.run(step.invoke, input, config)
  File "/home/.local/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 4475, in invoke
    return self._call_with_config(
  File "/home/.local/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1786, in _call_with_config
    context.run(
  File "/home/.local/lib/python3.10/site-packages/langchain_core/runnables/config.py", line 398, in call_func_with_variable_args
    return func(input, **kwargs)  # type: ignore[call-arg]
  File "/home/.local/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 4331, in _invoke
    output = call_func_with_variable_args(
  File "/home/.local/lib/python3.10/site-packages/langchain_core/runnables/config.py", line 398, in call_func_with_variable_args
    return func(input, **kwargs)  # type: ignore[call-arg]
  File "/home/.local/lib/python3.10/site-packages/langchain_experimental/llms/ollama_functions.py", line 132, in parse_response
    raise ValueError("`tool_calls` missing from AIMessage: {message}")
ValueError: `tool_calls` missing from AIMessage: {message}

Possible Cause: This error might be caused by version conflicts, or by the tool_calls attribute simply being absent from the AIMessage returned by the OllamaFunctions model, which parse_response then rejects (see the traceback above).

Request: Could you please share the exact versions listed in your requirements.txt? I suspect that version conflicts could be causing this issue.
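
In the meantime, here is a quick way to print the installed versions locally (standard-library only; the package names below are my assumption of what is involved, based on the imports in the traceback):

from importlib.metadata import version, PackageNotFoundError

# Distributions that appear in the traceback above; adjust as needed.
for pkg in ("langchain-core", "langchain-experimental", "langgraph"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")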

BennisonDevadoss commented 2 days ago

I am also facing the same issue. Any update on it?

mtcl commented 2 days ago

I am on an international trip and do not have access to my main computer at the moment. I will be back in about 10 days. In the meantime, try replacing OllamaFunctions with from langchain_ollama import ChatOllama and see if that works for you.
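
A rough sketch of the swap (untested from here; get_joke is just a placeholder for whatever tool the tutorial binds, and you will need the langchain-ollama package installed):

from langchain_core.tools import tool
from langchain_ollama import ChatOllama

@tool
def get_joke(topic: str) -> str:
    """Return a joke about the given topic."""
    return f"Here is a joke about {topic}."

# ChatOllama uses Ollama's native tool calling, so the experimental
# OllamaFunctions wrapper and format="json" are no longer needed.
model = ChatOllama(base_url="http://localhost:11434", model="llama3.2")

model_with_tools = model.bind_tools([get_joke])
response = model_with_tools.invoke("Tell me a joke?")

# With a tool-calling model such as llama3.2, this should now be a non-empty list.
print(response.tool_calls)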