Checked other resources
[X] I added a very descriptive title to this issue.
[X] I searched the LangGraph/LangChain documentation with the integrated search.
[X] I used the GitHub search to find a similar question and didn't find it.
[X] I am sure that this is a bug in LangGraph/LangChain rather than my code.
[X] I am sure this is better as an issue rather than a GitHub discussion, since this is a LangGraph bug and not a design question.
Example Code
import operator
from typing import TypedDict, Annotated, Sequence, Literal
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.messages import BaseMessage
from langgraph.graph import END, StateGraph
from pydantic import BaseModel
llm = ChatOpenAI(model='gpt-4o-mini')

class Decision(BaseModel):
    next: Literal['animals', 'car_dealer']

class AgentState(TypedDict):
    """Defining the internal state structure of the agent graph."""
    messages: Annotated[Sequence[BaseMessage], operator.add]
    next: str

prompt1 = ChatPromptTemplate.from_messages(
    [('placeholder', '{messages}'),
     ('system', 'Given the question, which agent should act next?')])
supervisor = prompt1 | llm.with_structured_output(Decision, method="json_schema")

prompt2 = ChatPromptTemplate.from_messages(
    [('system', 'you are an animal expert agent'),
     ('placeholder', '{messages}')])
animal_agent = prompt2 | llm | (lambda x: {'messages': [x.content]})

prompt3 = ChatPromptTemplate.from_messages(
    [('system', 'you are a car expert agent'),
     ('placeholder', '{messages}')])
car_agent = prompt3 | llm | (lambda x: {'messages': [x.content]})

workflow = StateGraph(AgentState)
workflow.add_node("supervisor", supervisor)
workflow.add_node("animals", animal_agent)
workflow.add_node("car_dealer", car_agent)

conditional_map = {k: k for k in ['car_dealer', 'animals']}
conditional_map["FINISH"] = END
workflow.add_conditional_edges("supervisor",
                               lambda x: x["next"],
                               conditional_map)
workflow.set_entry_point("supervisor")

graph = workflow.compile(debug=False)
Error Message and Stack Trace (if applicable)
Failed to use model_dump to serialize <class 'pydantic._internal._model_construction.ModelMetaclass'> to JSON: TypeError("BaseModel.model_dump() missing 1 required positional argument: 'self'")
Failed to use dict to serialize <class 'pydantic._internal._model_construction.ModelMetaclass'> to JSON: TypeError("BaseModel.dict() missing 1 required positional argument: 'self'")
Failed to use model_dump to serialize <class 'pydantic._internal._model_construction.ModelMetaclass'> to JSON: TypeError("BaseModel.model_dump() missing 1 required positional argument: 'self'")
Failed to use dict to serialize <class 'pydantic._internal._model_construction.ModelMetaclass'> to JSON: TypeError("BaseModel.dict() missing 1 required positional argument: 'self'")
Description
Hi, I'm trying to use OpenAI's structured output feature as a supervisor node in LangGraph. However, the first time I invoke the graph I get the serialization warnings above; they disappear when I invoke the graph a second time. The code above is a minimal example that reproduces the issue.
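As a possible workaround (not a confirmed fix for this bug), wrapping the structured-output chain in a node function that unpacks the Decision model into plain state keys keeps Pydantic objects out of the graph's serialization path. A minimal sketch, with the actual chain call stubbed out since it requires an OpenAI key; `supervisor_node` is a hypothetical name:

```python
from typing import Literal

from pydantic import BaseModel


class Decision(BaseModel):
    next: Literal['animals', 'car_dealer']


def supervisor_node(state: dict) -> dict:
    # In the real graph this would be: decision = supervisor.invoke(state)
    decision = Decision(next='animals')  # stand-in for the structured-output call
    # Return plain state keys instead of the Decision instance, so the
    # serializer only ever sees built-in types.
    return {'next': decision.next}


print(supervisor_node({'messages': []}))
```

The node would then be registered with `workflow.add_node("supervisor", supervisor_node)` in place of the raw chain.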
System Info
langchain==0.3.2
langchain-community==0.3.1
langchain-core==0.3.9
langchain-openai==0.2.2
langchain-text-splitters==0.3.0
langgraph==0.2.34
Python 3.12.6
macOS