Closed: francisjervis closed this issue 2 months ago
Here is the complete, non-working function that is meant to be used as a tool.
```python
# imports added for completeness; adjust paths to your langchain version
from langchain_core.messages import SystemMessage
from langchain_core.tools import tool
from langchain_core.utils.function_calling import convert_to_openai_function
from langchain_openai import ChatOpenAI


@tool
def choose_next_action(state):
    """
    This tool evaluates the interviewee's answer for completeness and clarity,
    and decides whether to ask a probing or clarifying question, or move on to the next topic.
    :param state: the current graph state
    :return: the model's response content
    """
    print("choose_next_action")
    messages = [SystemMessage(content="You are a sociologist conducting a semi-structured interview. Respond only in valid JSON.")]
    # add the last message from state to messages
    print(state["messages"][-1])
    messages.append(state["messages"][-1])
    print(messages)
    tools = [
        {
            "name": "evaluate_answer",
            "description": "Decide what to do based on the content of an interviewee's response",
            "parameters": {
                "type": "object",
                "required": ["next_action"],
                "properties": {
                    "next_action": {
                        "enum": [
                            "probing_question",
                            "clarifying_question",
                            "next_question",
                        ]
                    }
                },
            },
        }
    ]
    functions = [convert_to_openai_function(t) for t in tools]
    print(functions[0])
    # prompt = AIMessagePromptTemplate.from_messages(messages)
    llm = ChatOpenAI(model="gpt-3.5-turbo")
    response = llm.invoke(messages, functions=functions).content
    print(response)
    return response
```
Hi @francisjervis, could you share how that tool is being provided to the app / tool node?

A potential cause at a quick glance: ToolNode looks for tool invocations in the AI response (rather than function calls), which could mean that the function arguments are being parsed as empty.

I'm not sure what you mean by functions being deprecated - they are passed in the `tools` parameter of the chat completions API now, but are otherwise very much still there. Semantics aside, I updated the code to strictly comply with the format in OpenAI's docs (ironically, adding the "missing" `"type": "function"` outer layer of JSON), plus passing `tools` directly rather than converting to OpenAI functions, and it throws the same error.
```python
# imports added for completeness; adjust paths to your langchain version
from langchain_core.messages import SystemMessage
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI


@tool
def choose_next_action(state):
    """
    This tool evaluates the interviewee's answer for completeness and clarity,
    and decides whether to ask a probing or clarifying question, or move on to the next topic.
    """
    print("choose_next_action")
    messages = [SystemMessage(content="You are a sociologist conducting a semi-structured interview. Respond only in valid JSON.")]
    # add the last message from state to messages
    print(state["messages"][-1])
    messages.append(state["messages"][-1])
    print(messages)
    tools = [
        {
            "type": "function",
            "function": {
                "name": "evaluate_answer",
                "description": "Decide what to do based on the content of an interviewee's response",
                "parameters": {
                    "type": "object",
                    "required": ["next_action"],
                    "properties": {
                        "next_action": {
                            "enum": [
                                "probing_question",
                                "clarifying_question",
                                "next_question",
                            ]
                        }
                    },
                },
            },
        }
    ]
    print(tools[0])
    llm = ChatOpenAI(model="gpt-3.5-turbo")
    response = llm.invoke(messages, tools=tools).content
    print(response)
    return response
```
You're correct that function calling is still supported by OpenAI! However, since they have deprecated it[^1], and other providers support the "tool calling" construct, we have designed most of our orchestration logic around the tool-calling API. The ToolNode's API relies on `tool_calls` in the resulting message. While the `functions` parameter still works, and the OpenAI call would be accepted, the location where the call shows up in messages may be different, which could lead to some issues.
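To make that "location" difference concrete, here is a rough sketch using hand-written dicts in the chat completions response format (the ids and argument values are made up for illustration, not captured output):

```python
# Legacy `functions` parameter -> the call comes back as a single
# `function_call` object on the assistant message:
legacy_message = {
    "role": "assistant",
    "content": None,
    "function_call": {
        "name": "evaluate_answer",
        "arguments": '{"next_action": "next_question"}',
    },
}

# `tools` parameter -> the call comes back in a `tool_calls` list,
# each entry carrying an id that the follow-up tool message must echo:
tools_message = {
    "role": "assistant",
    "content": None,  # content is typically empty when the model calls a tool
    "tool_calls": [
        {
            "id": "call_123",  # hypothetical id
            "type": "function",
            "function": {
                "name": "evaluate_answer",
                "arguments": '{"next_action": "next_question"}',
            },
        }
    ],
}

# Code that only inspects `tool_calls` sees nothing on the legacy-style message:
print("tool_calls" in legacy_message)  # False
print("tool_calls" in tools_message)   # True
```

This also shows why reading `.content` off a tool-calling response can come back empty: the decision lives in the tool call's `arguments`, not in `content`.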
Specifically, the ToolNode works as follows:

1. It reads `messages` in the graph state
2. It checks the last message's `tool_calls` field for values to execute
3. It matches each `tool_call["name"]` and runs the corresponding tool on `tool_call["args"]`
4. It appends `ToolMessage(content=<the tool call result>, tool_call_id=tool_call["id"])` to the state history

I don't see how you're connecting things in your graph above, but two things I notice here that may be relevant:

- If `choose_next_action` is a node in the graph (rather than a schema you're providing to a model), there's no need to use the `@tool` decorator - the node itself isn't acting as a "tool" for the LLM, as far as I can tell from the snippet
- A node should return `{"messages": [the_ai_message]}` rather than the raw content
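A minimal sketch of those two suggestions, in plain Python with a stubbed-out model call (`fake_llm_response` is a made-up placeholder standing in for `llm.invoke`, so the node can be shown without an API key):

```python
def fake_llm_response(messages):
    # placeholder for a real ChatOpenAI call; returns a message-like dict
    return {"role": "assistant", "content": '{"next_action": "next_question"}'}


# A plain function (no @tool decorator) used directly as a graph node:
def choose_next_action(state):
    messages = [{"role": "system", "content": "You are a sociologist conducting a semi-structured interview."}]
    messages.append(state["messages"][-1])
    ai_message = fake_llm_response(messages)
    # return a state update, not the raw content
    return {"messages": [ai_message]}


state = {"messages": [{"role": "user", "content": "I moved here in 2019."}]}
update = choose_next_action(state)
print(update["messages"][-1]["content"])
```

The key point is the return value: the node hands back a dict keyed by `"messages"` so the graph can merge it into state, rather than returning the model output directly.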
Description
Since the function in question clearly works outside of LangGraph, this appears to be a spurious error. If it is not, the error message does not provide any useful information and, in any case, is not formatted correctly.
System Info

- langgraph 0.0.66
- python 3.11
- macOS 14.5