langchain-ai / langgraph

Build resilient language agents as graphs.
https://langchain-ai.github.io/langgraph/
MIT License

Question on OpenAI function_call in LangGraph #486

Closed · 147258369777 closed this issue 1 month ago

147258369777 commented 4 months ago

Checked other resources

Example Code

import functools
import operator
from typing import Annotated, Sequence, TypedDict

from langchain import hub
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain.output_parsers.openai_functions import JsonOutputFunctionsParser
from langchain_core.messages import BaseMessage, HumanMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_experimental.tools import PythonREPLTool
from langchain_openai import ChatOpenAI
from langgraph.graph import END, StateGraph

prompt = hub.pull("hwchase17/openai-functions-agent")
# Choose the LLM that will drive the agent
# (openai_api_key, openai_api_base and search_tools are defined elsewhere in my setup)
llm = ChatOpenAI(model="gpt-4-turbo-preview", openai_api_key=openai_api_key, openai_api_base=openai_api_base)
tools = search_tools
# Tool for executing Python code
python_repl_tool = PythonREPLTool()

def create_agent(llm, tools, system_prompt):
    # Each worker agent node has a name and some tools
    prompt = ChatPromptTemplate.from_messages(
        [
            (
                "system",
                system_prompt
            ),
            MessagesPlaceholder(variable_name="messages"),
            MessagesPlaceholder(variable_name="agent_scratchpad"),
        ]
    )
    agent = create_openai_tools_agent(llm, tools, prompt)
    executor = AgentExecutor(agent=agent, tools=tools)
    return executor

def agent_node(state, agent, name):
    result = agent.invoke(state)
    return {"messages": [HumanMessage(content=result["output"], name=name)]}

# Define the agent supervisor
members = ["Researcher", "Coder"]
system_prompt = (
    "You are a supervisor tasked with managing a conversation between the"
    " following workers:  {members}. Given the following user request,"
    " respond with the worker to act next. Each worker will perform a"
    " task and respond with their results and status. When finished,"
    " respond with FINISH."
)
options = ["FINISH"] + members
# Use OpenAI function calling
function_def = {
    "name": "route",
    "description": "Select the next role.",
    "parameters": {
        "title": "routeSchema",
        "type": "object",
        "properties": {
            "next": {
                "title": "Next",
                "anyOf": [
                    {"enum": options},
                ],
            }
        },
        "required": ["next"],
    },
}
supervisor_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system_prompt),
        MessagesPlaceholder(variable_name="messages"),
        (
            "system",
            "Given the conversation above, who should act next?"
            " Or should we FINISH? Select one of: {options}",
         ),
    ]
).partial(options=str(options), members=",".join(members))
supervisor_chain = (
    supervisor_prompt
    | llm.bind_functions(functions=[function_def], function_call="route")
    | JsonOutputFunctionsParser()
)
# Build the graph
class AgentState(TypedDict):
    # The annotation tells the graph that new messages will always
    # be appended to the current state
    messages: Annotated[Sequence[BaseMessage], operator.add]
    # The 'next' field indicates where to route to next
    next: str
research_agent = create_agent(llm, search_tools, "You are a web researcher.")
# from functools import partial
# def power(base, exponent):
#     return base ** exponent
# # Create a partial function with base fixed to 2
# square = partial(power, 2)
# print(square(3))  # Prints 8, equivalent to calling power(2, 3)
research_node = functools.partial(agent_node, agent=research_agent, name="Researcher")
code_agent = create_agent(
    llm,
    [python_repl_tool],
    "You may generate safe python code to analyze data and generate charts using matplotlib.",
)
code_node = functools.partial(agent_node, agent=code_agent, name="Coder")
workflow = StateGraph(AgentState)
workflow.add_node("Researcher", research_node)
workflow.add_node("Coder", code_node)
workflow.add_node("supervisor", supervisor_chain)
for member in members:
    # After each worker finishes, route back to the supervisor node
    workflow.add_edge(member, "supervisor")
conditional_map = {k: k for k in members}
conditional_map["FINISH"] = END
workflow.add_conditional_edges("supervisor", lambda x: x["next"], conditional_map)
workflow.set_entry_point("supervisor")
graph = workflow.compile()
for s in graph.stream(
    {
        "messages": [
            HumanMessage(content="Code hello world and print it to the terminal")
        ]
    }
):
    if "__end__" not in s:
        print(s)
        print("----")

Error Message and Stack Trace (if applicable)

raise OutputParserException(f"Could not parse function call: {exc}")
langchain_core.exceptions.OutputParserException: Could not parse function call: 'function_call'

Description

This is a problem I encountered while working through LangGraph's multi-agent blog post. It occurs not only here, but also in LangGraph's Planning Agent examples.

System Info

Latest version

shuqingjinse commented 4 months ago

Is bind_functions not taking effect?

147258369777 commented 4 months ago

> Is bind_functions not taking effect?

Quite strange. It worked just a moment ago, and now this error is showing up again.

gw00295510 commented 4 months ago

Any idea how to let the supervisor return not only who should act next, but also the thinking steps behind why that role was chosen?
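
One hypothetical way to do this (a sketch, not from the original example): add a required "reasoning" string to the route schema above, so the model must justify its choice alongside the routing decision. The field name and description here are illustrative.

function_def = {
    "name": "route",
    "description": "Select the next role.",
    "parameters": {
        "title": "routeSchema",
        "type": "object",
        "properties": {
            # Hypothetical field: forces the model to explain its routing choice
            "reasoning": {
                "title": "Reasoning",
                "type": "string",
                "description": "Step-by-step thinking behind the routing decision.",
            },
            "next": {
                "title": "Next",
                "anyOf": [
                    {"enum": options},
                ],
            },
        },
        "required": ["reasoning", "next"],
    },
}

The JsonOutputFunctionsParser in the example would then return both keys, e.g. {'reasoning': '...', 'next': 'Coder'}, while the conditional edge can keep reading only x['next'].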

rabader commented 4 months ago

This example uses the JsonOutputFunctionsParser:

from langchain.output_parsers.openai_functions import JsonOutputFunctionsParser

Is the problem with the parser, or is the example in these notebooks using the wrong type of parser? I am also having issues running the example notebooks under langgraph/examples/multi_agent, and I consistently get this same error.
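
For what it's worth, here is a minimal sketch of why this parser raises that exact error, assuming JsonOutputFunctionsParser reads the function call from the message's additional_kwargs (the messages below are constructed by hand for illustration):

from langchain.output_parsers.openai_functions import JsonOutputFunctionsParser
from langchain_core.messages import AIMessage

parser = JsonOutputFunctionsParser()

# A message where the model actually made a function call parses fine
ok = AIMessage(
    content="",
    additional_kwargs={
        "function_call": {"name": "route", "arguments": '{"next": "Coder"}'}
    },
)
print(parser.invoke(ok))  # {'next': 'Coder'}

# A plain-text reply has no "function_call" key in additional_kwargs,
# so the parser raises:
# OutputParserException: Could not parse function call: 'function_call'
bad = AIMessage(content="I think the Coder should act next.")
parser.invoke(bad)

If that reading is right, the parser itself is fine; the error means the model replied with plain text instead of making a function call.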

aimi0914 commented 3 months ago

I ran into the same problem (https://github.com/QwenLM/Qwen2/issues/568). I thought it was an issue with the Qwen model I was using, but switching to an OpenAI key didn't work either; the function was never called.

hinthornw commented 3 months ago

Which integration are you using to run Qwen? vLLM?

aimi0914 commented 3 months ago

> Which integration are you using to run Qwen? vLLM?

Yes. Does that matter? I later switched to the ChatGPT 3.5 API and function_call worked, but only intermittently; it seems to be down to chance.

gw00295510 commented 3 months ago

> Which integration are you using to run Qwen? vLLM?
>
> Yes. Does that matter? I later switched to the ChatGPT 3.5 API and function_call worked, but only intermittently; it seems to be down to chance.

I'm using the GPT-4 API, wrapped with the ChatOpenAI class. bind_functions takes a parameter named 'function_call' that accepts three kinds of values: 'none', 'auto', or a function name. With 'none', the model never calls a function. With 'auto', the LLM decides for itself whether to call one. With a function name, the model always calls that specific function.
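
A minimal sketch of the three modes described above, reusing function_def from the example code (assumes a model behind ChatOpenAI that supports OpenAI function calling):

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4")

# "none": the model never calls the function
never = llm.bind_functions(functions=[function_def], function_call="none")
# "auto": the model decides for itself whether to call it
auto = llm.bind_functions(functions=[function_def], function_call="auto")
# a function name: the model is forced to call that specific function
forced = llm.bind_functions(functions=[function_def], function_call="route")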

yebanliuying commented 2 months ago

Me too, that's a problem. (screenshot attached)

gw00295510 commented 2 months ago

> Me too, that's a problem. (screenshot attached)

Does this error mean the model never made a function_call at all, or did it answer you using the LLM's own plain-text ability?

I've found a solution.

If, like me, you wrapped the model with LangChain's ChatOpenAI class, go into that class's code and, inside the _stream or _generate method, pass your function list explicitly through the functions parameter. bind_functions and bind_tools are only compatible with models that support the OpenAI function calling API; models like Qwen obviously don't support it.

def _generate(
    self,
    messages: List[BaseMessage],
    stop: Optional[List[str]] = None,
    run_manager: Optional[CallbackManagerForLLMRun] = None,
    stream: Optional[bool] = None,
    **kwargs: Any,
) -> ChatResult:
    should_stream = stream if stream is not None else self.streaming
    if should_stream:
        stream_iter = self._stream(
            messages, stop=stop, run_manager=run_manager, **kwargs
        )
        return generate_from_stream(stream_iter)
    message_dicts, params = self._create_message_dicts(messages, stop)
    params = {
        **params,
        **({"stream": stream} if stream is not None else {}),
        **kwargs,
    }

    functions = [
        {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location.",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location"],
            },
        }
    ]

    formatted_functions = [convert_to_openai_function(fn) for fn in functions]

    response = self.client.create(messages=message_dicts, functions=formatted_functions, **params)
    # response = self.client.create(messages=message_dicts, **params)
    return self._create_chat_result(response)
TC2022lxf commented 2 months ago

> (quoting gw00295510's solution and code above)

Why am I getting the error 'ChatOpenAI' object has no attribute '_create_message_dicts'?

waywooKwong commented 1 month ago

> I ran into the same problem (QwenLM/Qwen2#568). I thought it was an issue with the Qwen model I was using, but switching to an OpenAI key didn't work either; the function was never called.

Is bind_functions only supported by ChatOpenAI?

vbarda commented 1 month ago

I wouldn't recommend using .bind_functions; use .bind_tools instead. We're also working on removing any mention of .bind_functions from the LangGraph documentation, so I'm going to close this issue.
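
For anyone landing here, a minimal sketch of the supervisor chain from the example rewritten with .bind_tools (reusing supervisor_prompt, llm, and function_def from above; the parser import path and tool_choice usage are my assumptions based on langchain-core's tools API, not an official migration):

from langchain_core.output_parsers.openai_tools import JsonOutputKeyToolsParser

supervisor_chain = (
    supervisor_prompt
    # tool_choice="route" forces the model to call the route tool
    | llm.bind_tools(tools=[function_def], tool_choice="route")
    # Returns the arguments of the single forced tool call, e.g. {"next": "Coder"}
    | JsonOutputKeyToolsParser(key_name="route", first_tool_only=True)
)

Since the parser still returns a dict with a "next" key, the rest of the graph (lambda x: x["next"]) should work unchanged.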