langchain-ai / langgraph-studio

Desktop app for prototyping and debugging LangGraph applications locally.
https://studio.langchain.com

DOC: langgraph cli crashing on hierarchical teams supervisor function #170

Open Archisketch-Luke opened 2 weeks ago

Archisketch-Luke commented 2 weeks ago

Issue with current documentation:

https://langchain-ai.github.io/langgraph/tutorials/multi_agent/hierarchical_agent_teams/

I have a project that implements the supervisor logic from the document above. I noticed that the LangGraph CLI/debugger crashes on the supervisor graph after it started supporting the subgraph feature. Through some investigation I found that the problem originates in `def create_team_supervisor(llm: ChatOpenAI, system_prompt, members)`. I suspect it comes from the use of a deprecated feature, `llm.bind_functions(functions=[function_def], function_call="route")`. Could somebody please look into this issue? The following is the screen I see when I start the debugger and CLI:

[screenshots of the crash]

Is this a problem with the debugger or with my code?

Idea or request for content:

No response

Archisketch-Luke commented 2 weeks ago

Just for the record, the API itself seems to be working without any problem; only the debugger/CLI crashes when I try to view the supervisor graph. The other graphs show their structure in the CLI without issue.

Archisketch-Luke commented 2 weeks ago

UPDATE: I've tried changing `bind_functions` to `bind_tools`, as recommended in the API docs. The issue persists.

JayeShen commented 1 week ago
[screenshot]

Same here. I suspect it's caused by using the supervisor; generating the graph image locally works without any problem. I used the following code:

```python
def create_team_supervisor(llm: ChatOpenAI, system_prompt, members) -> str:
    """An LLM-based router."""
    options = ["FINISH"] + members
    function_def = {
        "name": "route",
        "description": "Select the next role.",
        "parameters": {
            "title": "routeSchema",
            "type": "object",
            "properties": {
                "next": {
                    "title": "Next",
                    "anyOf": [
                        {"enum": options},
                    ],
                },
            },
            "required": ["next"],
        },
    }
    prompt = ChatPromptTemplate.from_messages(
        [
            ("system", system_prompt),
            MessagesPlaceholder(variable_name="messages"),
            (
                "system",
                "Given the conversation above, who should act next?"
                " Or should we FINISH? Select one of: {options}",
            ),
        ]
    ).partial(options=str(options), team_members=", ".join(members))
    return (
        prompt
        | trimmer
        | llm.bind_functions(functions=[function_def], function_call="route")
        | JsonOutputFunctionsParser()
    )
```
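For anyone trying to isolate the problem: the `route` schema in the tutorial is just a plain dict, so it can be built and inspected without any LangChain or LangGraph dependency. A minimal sketch (the helper name `build_route_tool` is my own, not from the tutorial):

```python
def build_route_tool(members):
    """Build the OpenAI-style function/tool schema the supervisor uses:
    the model must pick one team member, or FINISH to stop."""
    options = ["FINISH"] + list(members)
    return {
        "name": "route",
        "description": "Select the next role.",
        "parameters": {
            "title": "routeSchema",
            "type": "object",
            "properties": {
                # The model's only output is the name of the next actor.
                "next": {"title": "Next", "anyOf": [{"enum": options}]},
            },
            "required": ["next"],
        },
    }


# The same dict can be passed to llm.bind_functions(functions=[...]) or,
# after migrating off the deprecated API, to llm.bind_tools([...]);
# only the binding call changes, the schema stays identical.
tool = build_route_tool(["researcher", "coder"])
print(tool["parameters"]["properties"]["next"]["anyOf"][0]["enum"])
# → ['FINISH', 'researcher', 'coder']
```

Since both comments above see the crash only in the Studio/CLI rendering and not at runtime, checking that this schema is well-formed locally helps rule out the function definition itself as the cause.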