langchain-ai / langgraph

Build resilient language agents as graphs.
https://langchain-ai.github.io/langgraph/

Warning when using the structured output method 'json_schema'. #2064

Open thoffmann-artidis opened 2 hours ago

thoffmann-artidis commented 2 hours ago


Example Code

import operator
from typing import TypedDict, Annotated, Sequence, Literal

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.messages import BaseMessage
from langgraph.graph import END, StateGraph
from pydantic import BaseModel

llm = ChatOpenAI(model='gpt-4o-mini')

class Decision(BaseModel):
    next: Literal['animals', 'car_dealer']

class AgentState(TypedDict):
    """Defining the internal state structure of the agent graph."""
    messages: Annotated[Sequence[BaseMessage], operator.add]
    next: str

prompt1 = ChatPromptTemplate.from_messages(
        [('placeholder', '{messages}'),
         ('system', 'Given the question, which agent should act next?')])

# Supervisor chain: the structured output is a Decision whose `next` field drives the routing below.
supervisor = prompt1 | llm.with_structured_output(Decision, method="json_schema")

prompt2 = ChatPromptTemplate.from_messages(
        [('system', 'you are an animal expert agent'),
         ('placeholder', '{messages}')])

# Wrap the LLM's reply back into the shared `messages` channel of the state.
animal_agent = prompt2 | llm | (lambda x: {'messages': [x.content]})

prompt3 = ChatPromptTemplate.from_messages(
        [('system', 'you are a car expert agent'),
         ('placeholder', '{messages}')])

car_agent = prompt3 | llm | (lambda x: {'messages': [x.content]})

workflow = StateGraph(AgentState)
workflow.add_node("supervisor", supervisor)
workflow.add_node("animals", animal_agent)
workflow.add_node("car_dealer", car_agent)

# Route from the supervisor to whichever agent it selected; "FINISH" ends the run.
conditional_map = {k: k for k in ['car_dealer', 'animals']}
conditional_map["FINISH"] = END
workflow.add_conditional_edges("supervisor",
                               lambda x: x["next"],
                               conditional_map)

workflow.set_entry_point("supervisor")

graph = workflow.compile(debug=False)
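
For completeness, I invoke the compiled graph roughly like this (the exact question is immaterial; the warnings appear on the first call):

from langchain_core.messages import HumanMessage

result = graph.invoke({'messages': [HumanMessage(content='Which dog breeds are good with children?')]})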

Error Message and Stack Trace (if applicable)

Failed to use model_dump to serialize <class 'pydantic._internal._model_construction.ModelMetaclass'> to JSON: TypeError("BaseModel.model_dump() missing 1 required positional argument: 'self'")
Failed to use dict to serialize <class 'pydantic._internal._model_construction.ModelMetaclass'> to JSON: TypeError("BaseModel.dict() missing 1 required positional argument: 'self'")
Failed to use model_dump to serialize <class 'pydantic._internal._model_construction.ModelMetaclass'> to JSON: TypeError("BaseModel.model_dump() missing 1 required positional argument: 'self'")
Failed to use dict to serialize <class 'pydantic._internal._model_construction.ModelMetaclass'> to JSON: TypeError("BaseModel.dict() missing 1 required positional argument: 'self'")

Description

Hi, I'm trying to use OpenAI's structured output feature as a supervisor node in LangGraph. When I invoke the graph for the first time, I get the warnings above; they disappear when I invoke the graph a second time. The code above is a minimal example that reproduces the issue.
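
For clarity, the supervisor node is only meant to write the model's routing choice into the state's next field. A more explicit way to write that node would be a small wrapper like the sketch below; I have not verified whether it changes the warning.

def supervisor_node(state: AgentState) -> dict:
    # Run the structured-output chain and return a plain dict state update.
    decision: Decision = supervisor.invoke({'messages': state['messages']})
    return {'next': decision.next}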

System Info

langchain==0.3.2
langchain-community==0.3.1
langchain-core==0.3.9
langchain-openai==0.2.2
langchain-text-splitters==0.3.0
langgraph==0.2.34

Python 3.12.6
macOS

hinthornw commented 2 hours ago

Hello! Could you please share your pydantic version as well? Thank you!
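
For example:

import pydantic
print(pydantic.VERSION)  # pydantic exposes its version here in both v1 and v2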