Nachoeigu opened this issue 2 months ago
I am also working on a similar problem. I have an agent that asks the user a follow-up question using a tool. On the CLI it works fine, but in Studio I get this error: `EOFError: EOF when reading a line`.
@vbarda Thank you for your suggestion. I tried this approach too, but it is inefficient.
I mean, interrupts are mainly a debugging feature for your app (like the VS Code debugger: you see how it works step by step).
If I need to remove my human_feedback node in order to collect the end user's feedback, that seems inefficient, because I have to adjust my code just for the visualization in the IDE.
I think the human-in-the-loop feature is not available yet in LangGraph Studio, but if someone knows otherwise, let me know :)
@Nachoeigu you would still need to modify the node implementation, because `input()` is not going to work with the LangGraph API server that LangGraph Studio uses.
However, you can achieve the same behavior in the following way:
(1) update your code to something like this:
```python
# this is basically a no-op node
def human_feedback(state):
    pass

def should_continue(state):
    messages = state["messages"]
    last_message = messages[-1]
    if isinstance(last_message, HumanMessage):
        return "agent"
    return "end"

workflow.set_entry_point("agent")
workflow.add_node("agent", call_model)
workflow.add_node("human", human_feedback)
workflow.add_edge("agent", "human")
workflow.add_conditional_edges(
    "human",
    should_continue,
    {
        "agent": "agent",
        "end": END,
    },
)
```
(2) add an interrupt to the `human` node in the Studio UI.

When the graph stops, head over to the Inputs section on the bottom left and submit the `HumanMessage`. This updates the state in a similar way to what you were doing before.
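A minimal sketch of the same idea in code, in case you prefer not to toggle the interrupt in the Studio UI (note: when running under the LangGraph API server the checkpointer is provided for you, so the `MemorySaver` here is only needed for local runs):

```python
# Minimal sketch: bake the interrupt into the compiled graph instead of
# setting it in the Studio UI. The MemorySaver is only for local runs;
# the LangGraph API server supplies its own persistence.
from langgraph.checkpoint.memory import MemorySaver

checkpointer = MemorySaver()
app = workflow.compile(
    checkpointer=checkpointer,
    interrupt_before=["human"],  # pause right before the no-op human node
)
```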
Hope this helps! We're also going to add a video on this to the README.
@vbarda It doesn't seem to work if I pass the function as-is. But I tried with this:
```python
def read_human_feedback(state: MessagesState):
    # if state["messages"][-1].tool_calls == []:
    #     logger.info("AI: \n" + state["messages"][-1].content)
    #     user_msg = input("Reply: ")
    #     return {"messages": [HumanMessage(content=user_msg)]}
    # else:
    #     pass
    return {"messages": [HumanMessage(content="")]}
```
But it also didn't work with the LangGraph API. I recorded my screen for clarification: it generates a fork, but in the fork I cannot continue, even though the relationships between the nodes should allow it to continue:
https://github.com/user-attachments/assets/7cc17c29-ced6-4539-bcad-417301bf29f8
These are the relationships:
```python
workflow = StateGraph(MessagesState)

workflow.add_node("agent", call_model)
workflow.add_node("tools", tool_node)
workflow.add_node("human_feedback", read_human_feedback)

workflow.set_entry_point("agent")

workflow.add_conditional_edges(
    "agent",
    should_continue,
    {"human_feedback": "human_feedback", "tools": "tools"},
)
workflow.add_conditional_edges(
    "human_feedback",
    should_continue_with_feedback,
    {"agent": "agent", "end": END},
)
workflow.add_edge("tools", "agent")
workflow.add_edge("agent", "human_feedback")

checkpointer = MemorySaver()
app = workflow.compile(checkpointer=checkpointer)
```
Why do you have `workflow.add_edge("agent", "human_feedback")`? Is this needed? Don't you already have the conditional edge that goes from `agent` to either `human_feedback` or `tools`?
> Why do you have `workflow.add_edge("agent", "human_feedback")`? Is this needed? Don't you already have the conditional edge that goes from `agent` to either `human_feedback` or `tools`?
Yes, adjusted. It wasn't needed. :)
About what I mentioned: I think it is not yet implemented, or maybe it's a bug. If you fork a message, you cannot continue the flow, as I showed in the video (or maybe I'm wrong, I don't know).
You see, it should go to the agent, as highlighted in the main branch. But if I fork it, I cannot continue.
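For comparison, resuming does work for me from plain Python with `update_state` plus invoking with `None`. A minimal sketch (the `thread_id` and the reply text are just example values):

```python
# Minimal sketch of resuming the paused thread outside Studio.
# The thread_id and the message content are example values.
config = {"configurable": {"thread_id": "1"}}

# Inject the user's reply into the paused thread's state...
app.update_state(config, {"messages": [HumanMessage(content="Next Monday at 9 works")]})

# ...then resume from where the graph stopped by passing None as input.
for event in app.stream(None, config):
    print(event)
```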
For better clarity, here is the agent.py file that feeds the UI:
```python
import os
import sys
import json
import operator
import logging
from datetime import datetime
from typing import Annotated, List, Literal, TypedDict

from dotenv import load_dotenv

load_dotenv()
WORKDIR = os.getenv("WORKDIR")
os.chdir(WORKDIR)
sys.path.append(WORKDIR)

from langchain_core.messages import AnyMessage, HumanMessage, SystemMessage
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langchain_google_genai.chat_models import ChatGoogleGenerativeAI
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import END, StateGraph
from langgraph.prebuilt import ToolNode

from src.vector_database.utils import PineconeManagment
import logging_config

logger = logging.getLogger(__name__)

# This is for the RAG phase of the app
def format_retrieved_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)

pinecone_conn = PineconeManagment()
pinecone_conn.loading_vdb(index_name="ovidedentalclinic")
retriever = pinecone_conn.vdb.as_retriever(
    search_type="similarity", search_kwargs={"k": 2}
)
rag_chain = retriever | format_retrieved_docs

# Custom state (shadows the prebuilt MessagesState from langgraph.graph)
class MessagesState(TypedDict):
    messages: Annotated[List[AnyMessage], operator.add]

# All the tools to consider
@tool
def check_availability(desired_date: str, specialization: str):
    """Checking the database if the doctor has availability"""
    return True

@tool
def reschedule_appointment(old_date: str, new_date: str, dni_number: int, doctor_name: str):
    """Rescheduling an appointment"""
    return True

@tool
def cancel_appointment(date: str, dni_number: int, doctor_name: str):
    """Canceling an appointment"""
    return True

@tool
def get_catalog_specialists():
    """Obtain information about the doctors and specializations/services we provide"""
    with open(f"{WORKDIR}/data/catalog.json", "r") as file:
        file = json.loads(file.read())
    return file

@tool
def set_appointment(date: str, dni_number: int, specialization: str):
    """Set appointment with the doctor"""
    return True

@tool
def check_results(dni_number: int):
    """Check if the results of the patient are available"""
    return True

@tool
def reminder_appointment(dni_number: int):
    """Returns when the patient has their appointment with the doctor"""
    return "You have for next monday at 7 am"

@tool
def retrieve_faq_info(question: str):
    """Retrieve documents from general questions about the medical clinic"""
    return rag_chain.invoke(question)

tools = [
    cancel_appointment,
    get_catalog_specialists,
    retrieve_faq_info,
    set_appointment,
    reminder_appointment,
    check_availability,
    check_results,
    reschedule_appointment,
]
tool_node = ToolNode(tools)

model = ChatOpenAI(model="gpt-4o-mini", temperature=0)
# model = ChatGoogleGenerativeAI(model="gemini-1.5-pro-exp-0801", temperature=0)
model = model.bind_tools(tools=tools)

def should_continue(state: MessagesState) -> Literal["tools", "human_feedback"]:
    messages = state["messages"]
    last_message = messages[-1]
    if last_message.tool_calls:
        return "tools"
    return "human_feedback"

def should_continue_with_feedback(state: MessagesState) -> Literal["agent", "end"]:
    messages = state["messages"]
    last_message = messages[-1]
    if isinstance(last_message, HumanMessage):
        return "agent"
    return "end"

def call_model(state: MessagesState):
    messages = state["messages"]
    response = model.invoke(messages)
    return {"messages": [response]}

# The commented part is because it breaks the UI with the input function
def read_human_feedback(state: MessagesState):
    # if state["messages"][-1].tool_calls == []:
    #     logger.info("AI: \n" + state["messages"][-1].content)
    #     user_msg = input("Reply: ")
    #     return {"messages": [HumanMessage(content=user_msg)]}
    # else:
    #     pass
    return {"messages": [HumanMessage(content="")]}

workflow = StateGraph(MessagesState)

workflow.add_node("agent", call_model)
workflow.add_node("tools", tool_node)
workflow.add_node("human_feedback", read_human_feedback)

workflow.set_entry_point("agent")

workflow.add_conditional_edges(
    "agent",
    should_continue,
    {"human_feedback": "human_feedback", "tools": "tools"},
)
workflow.add_conditional_edges(
    "human_feedback",
    should_continue_with_feedback,
    {"agent": "agent", "end": END},
)
workflow.add_edge("tools", "agent")

checkpointer = MemorySaver()
app = workflow.compile(checkpointer=checkpointer)
```
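For completeness, this is roughly how I drive it from the CLI, where it does work (a sketch; the `thread_id` and the messages are example values):

```python
# Rough sketch of the CLI loop that works locally (but breaks in Studio
# because of input()). The thread_id and messages are example values.
config = {"configurable": {"thread_id": "patient-1"}}

user_msg = input("You: ")
while user_msg.strip():
    result = app.invoke({"messages": [HumanMessage(content=user_msg)]}, config)
    print("AI:", result["messages"][-1].content)
    user_msg = input("You: ")
```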
Thank you @hwchase17 for the amazing work you and the team are doing for the community. Keep it going!
Hmmm @Nachoeigu, how did you fork it? Would you be able to record a video?
> Hmmm @Nachoeigu, how did you fork it? Would you be able to record a video?
@hwchase17 Here it is:
https://github.com/user-attachments/assets/7c8479fa-4010-4464-8c77-4a7ed0c371eb
Any update on fixing human-in-the-loop for LangGraph Cloud?
Hi! I was testing out the new feature.
I don't know if it is possible yet, but I would like to know how to integrate human-in-the-loop directly in LangGraph Studio.
This is the node with the human-in-the-loop logic:
```python
def read_human_feedback(state: MessagesState):
    if state["messages"][-1].tool_calls == []:
        print("AI: \n" + state["messages"][-1].content)
        user_msg = input("Reply: ")
        return {"messages": [HumanMessage(content=user_msg)]}
    else:
        pass

def should_continue(state: MessagesState) -> Literal["agent", "end"]:
    messages = state["messages"]
    last_message = messages[-1]
    if isinstance(last_message, HumanMessage):
        return "agent"
    return "end"

workflow.add_conditional_edges(
    "human_feedback",
    should_continue,
    {"agent": "agent", "end": END},
)
```
It works when I run my Python file, but in LangGraph Studio it raises an EOF error because of the `input()` function. I think this is because the UI doesn't support that way of interacting with the code.
Thank you!