langchain-ai / langgraph

Build resilient language agents as graphs.
https://langchain-ai.github.io/langgraph/
MIT License

Must write to at least one of ['messages', 'next'] error in Langgraph/Supervisor Code #2153

Closed. fatih-sarioglu closed this issue 1 week ago.

fatih-sarioglu commented 1 week ago


Example Code

from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.messages import HumanMessage, BaseMessage
from langchain_openai import ChatOpenAI

from typing import Annotated, Literal, Sequence
from typing_extensions import TypedDict
from pydantic import BaseModel

import functools
import operator

from langgraph.graph import END, StateGraph, START
from langgraph.prebuilt import create_react_agent

from tools import read_pdf, save_json, fetch_document

from dotenv import load_dotenv
import os

load_dotenv()
os.environ["OPENAI_API_KEY"] = os.getenv("OPENAI_API_KEY")

llm = ChatOpenAI(
    model="gpt-4o-mini",
    temperature=0.5,
)

# Raw string so the backslashes in the Windows path aren't treated as escapes
FIRST_DOC_PATH = r"docs\sps_101\syllabus\SPS 101 A-B Syllabus Spring 2022.pdf"

def agent_node(state, agent, name):
    result = agent.invoke(state)
    return {
        "messages": [HumanMessage(content=result["messages"][-1].content, name=name)]
    }

members = ["Outliner", "Question Generator", "Answer Generator", "Document Saver"]

system_prompt = (
    "You are a supervisor tasked with managing a conversation between the"
    " following workers:  {members}. Given the following user request,"
    " respond with the worker to act next. Each worker will perform a"
    " task and respond with their results and status. When finished,"
    " respond with FINISH."
)

options = ["FINISH"] + members

class RouteResponse(BaseModel):
    next_agent: Literal[*options]

prompt = ChatPromptTemplate(
    [
        ("system", system_prompt),
        MessagesPlaceholder(variable_name="messages"),
        (
            "system",
            "Given the conversation above, who should act next?"
            " Or should we FINISH? Select one of: {options}",
        )
    ]
).partial(options=str(options), members=", ".join(members))

def supervisor_agent(state):
    supervisor_chain = prompt | llm.with_structured_output(RouteResponse)
    return supervisor_chain.invoke(state)

class AgentState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], operator.add]
    next: str

outliner_agent = create_react_agent(llm, tools=[read_pdf])
outliner_node = functools.partial(agent_node, agent=outliner_agent, name="Outliner")

question_generator_agent = create_react_agent(llm, tools=[fetch_document])
question_generator_node = functools.partial(agent_node, agent=question_generator_agent, name="Question Generator")

answer_generator_agent = create_react_agent(llm, tools=[])
answer_generator_node = functools.partial(agent_node, agent=answer_generator_agent, name="Answer Generator")

document_saver_agent = create_react_agent(llm, tools=[save_json])
document_saver_node = functools.partial(agent_node, agent=document_saver_agent, name="Document Saver")

workflow = StateGraph(AgentState)
# Use the wrapped node functions so each agent's reply is tagged with its name
workflow.add_node("Outliner", outliner_node)
workflow.add_node("Question Generator", question_generator_node)
workflow.add_node("Answer Generator", answer_generator_node)
workflow.add_node("Document Saver", document_saver_node)
workflow.add_node("Supervisor", supervisor_agent)

for member in members:
    workflow.add_edge(member, "Supervisor")

conditional_map = {k: k for k in members}
conditional_map["FINISH"] = END
workflow.add_conditional_edges("Supervisor", lambda x: x["next"], conditional_map)
workflow.add_edge(START, "Supervisor")

graph = workflow.compile()

for s in graph.stream(
    {"messages": [HumanMessage(content=f"Read the syllabus document with path: {FIRST_DOC_PATH} and create an outline for a question, then generate the question using the weekly lecture slides, then, generate an answer for the question, and finally, save the document which contains question and answer to a JSON file.")]},
    {"recursion_limit": 100},
    debug=True
):
    if "__end__" not in s:
        print(s)
        print("----")
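For readers following along, the routing mechanics in the code above can be traced with plain dictionaries. This is a stdlib-only sketch, not LangGraph internals; the string `"__end__"` stands in for `langgraph.graph.END`:

```python
import operator

# The `operator.add` reducer on AgentState.messages concatenates each
# node's update onto the running history instead of overwriting it.
history = ["user: read the syllabus"]
update = ["Outliner: outline created"]
history = operator.add(history, update)
print(history)  # ['user: read the syllabus', 'Outliner: outline created']

# The conditional edge maps the supervisor's "next" value to a node name,
# with "FINISH" routed to the end of the graph.
members = ["Outliner", "Question Generator", "Answer Generator", "Document Saver"]
END = "__end__"  # stand-in for langgraph.graph.END
conditional_map = {m: m for m in members}
conditional_map["FINISH"] = END

def route(state):
    return conditional_map[state["next"]]

print(route({"next": "Outliner"}))  # Outliner
print(route({"next": "FINISH"}))    # __end__
```

This is why the supervisor's output must carry a `next` key: the routing lambda reads `x["next"]` from the graph state.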

Error Message and Stack Trace (if applicable)

No response

Description

My code is 90 percent the same as LangGraph's supervisor example. I just want to build a simple supervisor-based agentic workflow, but I get this error. I couldn't find a solution in the other issues. Please help me!

System Info

annotated-types==0.7.0 anyio==4.6.0 asttokens==2.4.1 certifi==2024.8.30 charset-normalizer==3.4.0 colorama==0.4.6 comm==0.2.2 debugpy==1.8.7 decorator==5.1.1 distro==1.9.0 executing==2.1.0 h11==0.14.0 httpcore==1.0.6 httpx==0.27.2 idna==3.10 ipykernel==6.29.5 ipython==8.28.0 jedi==0.19.1 jiter==0.6.1 jsonpatch==1.33 jsonpointer==3.0.0 jupyter_client==8.6.3 jupyter_core==5.7.2 langchain-core==0.3.12 langchain-openai==0.2.3 langgraph==0.2.35 langgraph-checkpoint==2.0.1 langsmith==0.1.132 matplotlib-inline==0.1.7 msgpack==1.1.0 nest-asyncio==1.6.0 openai==1.52.0 orjson==3.10.7 packaging==24.1 parso==0.8.4 platformdirs==4.3.6 prompt_toolkit==3.0.48 psutil==6.1.0 pure_eval==0.2.3 pydantic==2.9.2 pydantic_core==2.23.4 Pygments==2.18.0 pypdf==5.0.1 python-dateutil==2.9.0.post0 python-dotenv==1.0.1 pywin32==308 PyYAML==6.0.2 pyzmq==26.2.0 regex==2024.9.11 requests==2.32.3 requests-toolbelt==1.0.0 six==1.16.0 sniffio==1.3.1 stack-data==0.6.3 tenacity==8.5.0 tiktoken==0.8.0 tornado==6.4.1 tqdm==4.66.5 traitlets==5.14.3 typing_extensions==4.12.2 urllib3==2.2.3 wcwidth==0.2.13

vbarda commented 1 week ago

@fatih-sarioglu the issue is in how you're defining the pydantic model for structured output -- see below for the fix:

class RouteResponse(BaseModel):
    # next_agent: Literal[*options]  <-- this won't work unless you also rename the field to "next_agent" in `AgentState` and the rest of the code
    next: Literal[*options]  # <-- correct: the field name must match the "next" state key

The reason the code is breaking is that each graph node must write an update to at least one of the state keys ("messages" / "next"). In this case, the supervisor node should return a dictionary with the key "next" (i.e., the structured output based on RouteResponse):

def supervisor_agent(state):
    supervisor_chain = prompt | llm.with_structured_output(RouteResponse)
    return supervisor_chain.invoke(state)  # <-- this should return {"next": <agent name>}
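To illustrate that rule in isolation, here is a stdlib-only sketch. The `validate_update` helper is hypothetical, written for this example; the real check lives inside LangGraph's state-update logic:

```python
# Hypothetical sketch of the check behind the error message; not
# LangGraph's actual internals.
STATE_KEYS = ("messages", "next")

def validate_update(update: dict) -> dict:
    # A node's return value must write at least one known state key.
    if not any(key in update for key in STATE_KEYS):
        raise ValueError(f"Must write to at least one of {list(STATE_KEYS)}")
    return update

validate_update({"next": "Outliner"})          # fine: "next" is a state key
# validate_update({"next_agent": "Outliner"})  # raises, like the issue above
```

With the field renamed from `next_agent` to `next`, the supervisor's update matches a known state key and the check passes.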

hope this helps!

fatih-sarioglu commented 1 week ago

Oh, it was a silly mistake on my part. Your solution works. Thank you, man.