langchain-ai / langgraph

Build resilient language agents as graphs.
https://langchain-ai.github.io/langgraph/

Spurious validation error #653

Closed by francisjervis 2 months ago

francisjervis commented 3 months ago


Example Code

The code below works.

from langchain_core.messages import AnyMessage, HumanMessage, SystemMessage, AIMessage
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langchain_core.utils.function_calling import convert_to_openai_function

import json

def main():
    '''
    Uses the OpenAI API with a function schema to choose the next action
    based on the last message in the conversation.
    '''
    print("choose_next_action")
    messages = [
        SystemMessage(content="You are a sociologist conducting a semi-structured interview. Respond only in valid JSON."),
        AIMessage(content="Why do you have a cat?"),
        HumanMessage(content="I am a student."),
    ]

    print(messages)

    tools = [
                {
                    "name": "evaluate_answer",
                    "description": "Decide what to do based on the content of an interviewee's response",
                    "parameters": {
                        "required": [
                            "next_action"
                        ],
                        "properties": {
                            "next_action": {
                                "enum": [
                                    "probing_question",
                                    "clarifying_question",
                                    "next_question"
                                ]
                            }
                        },
                        "type": "object"
                    }
                }
            ]

    functions = [convert_to_openai_function(t) for t in tools]

    print(functions[0])

    # prompt = AIMessagePromptTemplate.from_messages(messages)
    llm = ChatOpenAI(model="gpt-3.5-turbo")
    response = llm.invoke(messages, functions=functions).content

    next_action = json.loads(response)

    print(next_action["next_action"])

    return response

if __name__ == "__main__":
    main()

When the same function is converted to a ToolNode, it crashes.

Error Message and Stack Trace (if applicable)

/Users/francis/PycharmProjects/langgraph-test/.venv/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:119: LangChainDeprecationWarning: The method `BaseTool.__call__` was deprecated in langchain-core 0.1.47 and will be removed in 0.3.0. Use invoke instead.
  warn_deprecated(
2024-06-12 13:29:32 - 1 validation error for choose_next_actionSchema
state
  field required (type=value_error.missing)
Traceback (most recent call last):
  File "/Users/francis/PycharmProjects/langgraph-test/.venv/lib/python3.11/site-packages/chainlit/utils.py", line 44, in wrapper
    return await user_function(**params_values)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/francis/PycharmProjects/langgraph-test/demo.py", line 102, in on_message
    state = await graph.ainvoke(state)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/francis/PycharmProjects/langgraph-test/.venv/lib/python3.11/site-packages/langgraph/pregel/__init__.py", line 1456, in ainvoke
    async for chunk in self.astream(
  File "/Users/francis/PycharmProjects/langgraph-test/.venv/lib/python3.11/site-packages/langgraph/pregel/__init__.py", line 1292, in astream
    _panic_or_proceed(done, inflight, step)
  File "/Users/francis/PycharmProjects/langgraph-test/.venv/lib/python3.11/site-packages/langgraph/pregel/__init__.py", line 1489, in _panic_or_proceed
    raise exc
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/tasks.py", line 277, in __step
    result = coro.send(None)
             ^^^^^^^^^^^^^^^
  File "/Users/francis/PycharmProjects/langgraph-test/.venv/lib/python3.11/site-packages/langgraph/pregel/retry.py", line 114, in arun_with_retry
    await task.proc.ainvoke(task.input, task.config)
  File "/Users/francis/PycharmProjects/langgraph-test/.venv/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2531, in ainvoke
    input = await step.ainvoke(input, config, **kwargs)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/francis/PycharmProjects/langgraph-test/.venv/lib/python3.11/site-packages/langgraph/utils.py", line 117, in ainvoke
    ret = await asyncio.create_task(
          ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/futures.py", line 287, in __await__
    yield self  # This tells Task to wait for completion.
    ^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/tasks.py", line 349, in __wakeup
    future.result()
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/futures.py", line 203, in result
    raise self._exception.with_traceback(self._exception_tb)
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/tasks.py", line 279, in __step
    result = coro.throw(exc)
             ^^^^^^^^^^^^^^^
  File "/Users/francis/PycharmProjects/langgraph-test/.venv/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 547, in run_in_executor
    return await asyncio.get_running_loop().run_in_executor(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/futures.py", line 287, in __await__
    yield self  # This tells Task to wait for completion.
    ^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/tasks.py", line 349, in __wakeup
    future.result()
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/futures.py", line 203, in result
    raise self._exception.with_traceback(self._exception_tb)
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/francis/PycharmProjects/langgraph-test/demo.py", line 120, in evaluate_answer_node
    next_action = json.loads(choose_next_action(state))["next_action"]
                             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/francis/PycharmProjects/langgraph-test/.venv/lib/python3.11/site-packages/langchain_core/_api/deprecation.py", line 148, in warning_emitting_wrapper
    return wrapped(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/francis/PycharmProjects/langgraph-test/.venv/lib/python3.11/site-packages/langchain_core/tools.py", line 567, in __call__
    return self.run(tool_input, callbacks=callbacks)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/francis/PycharmProjects/langgraph-test/.venv/lib/python3.11/site-packages/langchain_core/tools.py", line 417, in run
    raise e
  File "/Users/francis/PycharmProjects/langgraph-test/.venv/lib/python3.11/site-packages/langchain_core/tools.py", line 406, in run
    parsed_input = self._parse_input(tool_input)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/francis/PycharmProjects/langgraph-test/.venv/lib/python3.11/site-packages/langchain_core/tools.py", line 304, in _parse_input
    result = input_args.parse_obj(tool_input)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/francis/PycharmProjects/langgraph-test/.venv/lib/python3.11/site-packages/pydantic/v1/main.py", line 526, in parse_obj
    return cls(**obj)
           ^^^^^^^^^^
  File "/Users/francis/PycharmProjects/langgraph-test/.venv/lib/python3.11/site-packages/pydantic/v1/main.py", line 341, in __init__
    raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for choose_next_actionSchema
state
  field required (type=value_error.missing)

Description

Since the function in question clearly works outside of LangGraph, this appears to be a spurious error. If it is not, the error message provides no useful information and, in any case, is not formatted correctly.

System Info

langgraph 0.0.66
Python 3.11
macOS 14.5

francisjervis commented 3 months ago

Here is the whole, non-working function that is intended to be used as a tool.

from langchain_core.messages import SystemMessage
from langchain_core.tools import tool
from langchain_core.utils.function_calling import convert_to_openai_function
from langchain_openai import ChatOpenAI

@tool
def choose_next_action(state):
    """
    Evaluates the interviewee's answer for completeness and clarity,
    and decides whether to ask a probing or clarifying question, or
    move on to the next topic.
    """
    print("choose_next_action")
    messages = [SystemMessage(content="You are a sociologist conducting a semi-structured interview. Respond only in valid JSON.")]
    # add last message from state to messages
    print(state["messages"][-1])
    messages.append(state["messages"][-1])
    print(messages)

    tools = [
                {
                    "name": "evaluate_answer",
                    "description": "Decide what to do based on the content of an interviewee's response",
                    "parameters": {
                        "required": [
                            "next_action"
                        ],
                        "properties": {
                            "next_action": {
                                "enum": [
                                    "probing_question",
                                    "clarifying_question",
                                    "next_question"
                                ]
                            }
                        },
                        "type": "object"
                    }
                }
            ]

    functions = [convert_to_openai_function(t) for t in tools]

    print(functions[0])

    # prompt = AIMessagePromptTemplate.from_messages(messages)
    llm = ChatOpenAI(model="gpt-3.5-turbo")
    response = llm.invoke(messages, functions=functions).content
    print(response)

    return response

hinthornw commented 3 months ago

Hi @francisjervis, could you share how that tool is being provided to the app / ToolNode?

A potential cause at a quick glance: the functions parameter is deprecated in favor of tool calling, so the model's reply may not carry the tool_calls field that downstream components expect.
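
A sketch of the tool-calling style, reusing the tools list and messages from the snippets above (llm_with_tools and ai_msg are illustrative names, not from the original code):

from langchain_openai import ChatOpenAI

# Bind the tool schemas to the model instead of passing functions=.
llm = ChatOpenAI(model="gpt-3.5-turbo")
llm_with_tools = llm.bind_tools(tools)

ai_msg = llm_with_tools.invoke(messages)
# A tool-calling model populates ai_msg.tool_calls, the field that
# downstream components such as ToolNode read.
print(ai_msg.tool_calls)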

francisjervis commented 3 months ago

I'm not sure what you mean by functions being deprecated - they are passed in the tools parameter of the Chat Completions API now, but are otherwise very much still there. Semantics aside, I updated the code to comply strictly with the format in OpenAI's docs (ironically, adding the "missing" "type": "function" outer layer of JSON) and passed the tools directly rather than converting them to OpenAI functions. It throws the same error.

from langchain_core.messages import SystemMessage
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def choose_next_action(state):
    """
    This tool evaluates the interviewee's answer for completeness and clarity,
    and decides whether to ask a probing or clarifying question, or move on to the next topic.
    """
    print("choose_next_action")
    messages = [SystemMessage(content="You are a sociologist conducting a semi-structured interview. Respond only in valid JSON.")]
    # add last message from state to messages
    print(state["messages"][-1])
    messages.append(state["messages"][-1])
    print(messages)

    tools = [
                {
                    "type": "function",
                    "function": {
                        "name": "evaluate_answer",
                        "description": "Decide what to do based on the content of an interviewee's response",
                        "parameters": {
                            "type": "object",
                            "required": [
                                "next_action"
                            ],
                            "properties": {
                                "next_action": {
                                    "enum": [
                                        "probing_question",
                                        "clarifying_question",
                                        "next_question"
                                    ]
                                }
                            },
                        }
                    }
                }
            ]

    print(tools[0])
    llm = ChatOpenAI(model="gpt-3.5-turbo")
    response = llm.invoke(messages, tools=tools).content
    print(response)

    return response

hinthornw commented 3 months ago

You're correct that function calling is still supported by OpenAI! However, since they have deprecated it^1 and other providers support the "tool calling" construct, we have designed most of our orchestration logic around the tool-calling API. The ToolNode relies on the tool_calls field of the resulting message. While the functions parameter still works and the OpenAI call would be accepted, the model's reply lands in a different place in the message (function_call rather than tool_calls), which can lead to issues like this.

Specifically, the ToolNode works as follows (a code sketch follows the list):

  1. Selects the last message from the messages in the graph state
  2. Looks at the message.tool_calls field for values to execute
  3. Selects the tools based on tool_call["name"] and runs each tool on tool_call["args"]
  4. For each result, it adds a ToolMessage(content=<the tool call result>, tool_call_id=tool_call["id"]) to the state history

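A minimal sketch of that loop, not the actual ToolNode source (run_tool_calls and tools_by_name are illustrative names):

from langchain_core.messages import ToolMessage

def run_tool_calls(state, tools_by_name):
    # 1. Select the last message from the graph state.
    last_message = state["messages"][-1]
    tool_messages = []
    # 2. Look at its tool_calls field for calls to execute.
    for tool_call in last_message.tool_calls:
        # 3. Select the tool by name and run it on the call's args.
        selected_tool = tools_by_name[tool_call["name"]]
        result = selected_tool.invoke(tool_call["args"])
        # 4. Wrap each result in a ToolMessage tied to the call id.
        tool_messages.append(
            ToolMessage(content=str(result), tool_call_id=tool_call["id"])
        )
    return {"messages": tool_messages}
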
I don't see how you're connecting things in your graph above, but two things I notice here may be relevant (see the sketch after this list):

  1. If choose_next_action is a node in the graph (rather than the schema you're providing to a model), there's no need to use the @tool decorator - as far as I can tell from the snippet, the node itself isn't acting as a "tool" for the LLM
  2. The node should return {"messages": [the_ai_message]} rather than the raw content

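A minimal sketch combining both suggestions, assuming a messages-style state and reusing llm_with_tools from the earlier sketch (evaluate_answer_node is an illustrative name):

from langchain_core.messages import SystemMessage

# Plain node function: no @tool decorator, it takes the state dict directly.
def evaluate_answer_node(state):
    messages = [
        SystemMessage(content="You are a sociologist conducting a semi-structured interview. Respond only in valid JSON."),
        state["messages"][-1],
    ]
    ai_message = llm_with_tools.invoke(messages)
    # Return a state update, not the raw content string.
    return {"messages": [ai_message]}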