Open · TechKemon opened this issue 1 month ago
Hey @TechKemon, a couple of things. First, I would recommend formatting your code with markdown code blocks so it's easier for others to read and follow along.

Just from a quick glance, I have a feeling the error has something to do with:
```python
workflow.add_conditional_edges(
    "router",
    lambda x: x,
    {
        "conversational": "conversational",
        "suicide_prevention": "suicide_prevention"
    }
)
```
I would look into how you're formatting the conditional edges; your second conditional edge is correct. If you prefer to use a lambda, you need some way for the `route_query` function to "save" the chosen route into the state, so that the lambda can reference it with `lambda state: state["route"]` (see the sketch below). But honestly, it'll end up being harder to read and understand for someone at first glance.
Generally, routing is done like this example in cell 3 (`### Router`); you might want to look into that approach.
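To make that concrete, here's a minimal sketch of both wirings (the `route` state key, the placeholder node bodies, and names like `router_node` are mine for illustration, not from your code or the notebook):

```python
from typing import Annotated, Literal, TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import AnyMessage, add_messages


class State(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]
    route: str  # only needed for the lambda variant


# --- Variant 1: lambda in the conditional edge ---
# The router node must store its decision in the state so the lambda can read it back.
def router_node(state: State):
    decision = "conversational"  # ... decide from state["messages"] ...
    return {"route": decision}


# --- Variant 2 (recommended): the routing function returns the branch name directly ---
def route_query(state: State) -> Literal["conversational", "suicide_prevention"]:
    return "conversational"  # ... decide from state["messages"] ...


def conversational(state: State):
    return {"messages": []}  # placeholder node


def suicide_prevention(state: State):
    return {"messages": []}  # placeholder node


workflow = StateGraph(State)
workflow.add_node("conversational", conversational)
workflow.add_node("suicide_prevention", suicide_prevention)

# Variant 2 wiring: no extra node, no lambda, no `route` key needed.
workflow.add_conditional_edges(
    START,
    route_query,
    {"conversational": "conversational", "suicide_prevention": "suicide_prevention"},
)

# Variant 1 wiring would instead be:
#   workflow.add_node("router", router_node)
#   workflow.add_edge(START, "router")
#   workflow.add_conditional_edges(
#       "router",
#       lambda state: state["route"],
#       {"conversational": "conversational", "suicide_prevention": "suicide_prevention"},
#   )

workflow.add_edge("conversational", END)
workflow.add_edge("suicide_prevention", END)
graph = workflow.compile()
```

Variant 2 is the shape you'll see in most LangGraph examples: the routing function doubles as the condition, so nothing extra has to be written into the state.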
If you're looking for parallel node execution (it's unclear from your current code which direction you intend), I would look into this how-to.
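And in case parallelism is what you're after, here's a rough sketch of the fan-out/fan-in shape that how-to covers (the node names `step_a`, `step_b`, and `combine` are placeholders I made up):

```python
from typing import Annotated, TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import AnyMessage, add_messages


class ParallelState(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]


def step_a(state: ParallelState):
    return {"messages": []}  # placeholder branch


def step_b(state: ParallelState):
    return {"messages": []}  # placeholder branch


def combine(state: ParallelState):
    return {"messages": []}  # merge the results of both branches


builder = StateGraph(ParallelState)
builder.add_node("step_a", step_a)
builder.add_node("step_b", step_b)
builder.add_node("combine", combine)

# Fan out: two unconditional edges from the same source run in parallel.
builder.add_edge(START, "step_a")
builder.add_edge(START, "step_b")

# Fan in: both branches feed into a single downstream node.
builder.add_edge("step_a", "combine")
builder.add_edge("step_b", "combine")
builder.add_edge("combine", END)

parallel_graph = builder.compile()
```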
And also, I would direct you to join the community Slack for general LangGraph questions.
Thanks Shiv for the prompt answer. Will try this and let you know.
Hey, I'm getting these two errors repeatedly: `InvalidUpdateError: Expected dict, got conversational` and an `Unhashable Dict` error.

Code:
```python
def route_query(state: State):
    messages = state["messages"]
    last_message = messages[-1]

    # Format the planner prompt
    formatted_messages = planner_prompt.format_messages(input=last_message.content)
    response = model.invoke(formatted_messages)
    print(response)

    # Append the response to messages as an AIMessage
    state["messages"].append(AIMessage(content=response.content))
    # messages = [AIMessage(content=response.content)]
    # state["summary"] = response.content

    # Determine the route based on the response content
    final = response.content.strip().lower()
    if "suicide prevention agent" in final:
        state["route"] = final
    elif "conversational agent" in final:
        state["route"] = final
    else:
        # Handle unexpected cases if necessary
        state["route"] = "unknown"

    # Return the updated route in the state
    return {"messages": response}


workflow.add_node("router", route_query)
workflow.add_conditional_edges(
    "router",
    lambda state: state.get("route", "unknown"),
    {
        "suicide_prevention": "suicide_prevention",
        "conversational": "conversational",
        "unknown": END  # Or handle 'unknown' as needed
    }
)
```
The unhashable dict error could be due to a return value giving a dict where the conditional expects just a value, and the `InvalidUpdateError: Expected dict, got conversational` stems from the lambda usage in the conditional edge.

Here is the adjusted code without any errors:
```python
from langchain_core.messages import SystemMessage, HumanMessage, AIMessage
from pydantic import BaseModel, Field
from langgraph.graph.message import AnyMessage, add_messages
from typing import Literal, Annotated
from langchain_core.prompts import ChatPromptTemplate
from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.memory import MemorySaver
from typing import TypedDict, List

# Define the state
class State(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]

# Initialize OpenAI model
model = llm

# Define prompts
planner_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a planner agent that decides which specialized agent to call based on the user's input. If the query indicates a risk of suicide or self-harm, respond with 'suicide_prevention'. Otherwise, respond with 'conversational'."),
    ("human", "{input}"),
])

conversational_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are an empathetic conversational agent. Provide supportive responses to help relieve student stress."),
    ("human", "{input}"),
])

suicide_prevention_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a suicide prevention agent. Apply QPR (Question, Persuade, Refer) techniques and refer to trained professionals or suicide prevention helpline. Be extremely cautious and supportive."),
    ("human", "{input}"),
])

# Define router
def route_query(state: State):
    class RouteQuery(BaseModel):
        """Route a user query to the most relevant datasource."""

        route: Literal["conversational", "suicide_prevention"] = Field(
            ...,
            description="Given a user question choose to route it to normal conversation or a suicide prevention.",
        )

    structured_llm_router = model.with_structured_output(RouteQuery)
    question_router = planner_prompt | structured_llm_router
    last_message = state["messages"][-1]
    resp = question_router.invoke({"input": last_message})
    return resp.route

def run_conversational_agent(state: State):
    print("Running conversational agent")
    convo_model = conversational_prompt | model
    response = convo_model.invoke(state["messages"])
    return {"messages": response}

def run_suicide_prevention_agent(state: State):
    print("Running suicide prevention agent")
    concern_model = suicide_prevention_prompt | model
    response = concern_model.invoke(state["messages"])
    return {"messages": response}

# Create the graph
workflow = StateGraph(State)

# Add nodes
workflow.add_node("conversational", run_conversational_agent)
workflow.add_node("suicide_prevention", run_suicide_prevention_agent)

# Add edges
workflow.add_conditional_edges(
    START,
    route_query,
    {
        "conversational": "conversational",
        "suicide_prevention": "suicide_prevention"
    },
)
workflow.add_edge("conversational", END)
workflow.add_edge("suicide_prevention", END)

# Compile the graph
memory = MemorySaver()
graph = workflow.compile(checkpointer=memory)

# Function to run a conversation turn
def chat(message: str, config: dict):
    print("User:", message)
    result = graph.invoke({"messages": [HumanMessage(content=message)]}, config=config)
    return result["messages"][-1]

config = {"configurable": {"thread_id": "test"}}

response = chat("Hi! I'm feeling really stressed about my exams", config)
print("Bot:", response.content)

response = chat("I don't know if I can handle this stress anymore", config)
print("Bot:", response.content)
```
Things I changed: the router now uses `with_structured_output` so the routing decision comes back as a plain `"conversational"` / `"suicide_prevention"` string, the conditional edges run directly from `START` with `route_query` (no separate `"router"` node and no lambda), and the node functions only return `{"messages": ...}` updates instead of mutating the state in place.

That should fix all the errors and set you up with flexibility for future iterations using a LangGraph-style approach. I didn't use a lambda since it adds complexity and is harder to read and follow. I recommend looking into all the links I referenced; they will help you understand the reasoning behind the changes and the accepted paradigm approaches.
Getting this error repeatedly: `InvalidUpdateError: Expected dict, got conversational`

Full code from https://github.com/PeoplePlusAI/Sukoon