langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com
MIT License

AIMessage played before invoking a tool is not registered in the Agent memory #22357

Closed · Benjaminrivard closed this issue 4 months ago

Benjaminrivard commented 4 months ago

Example Code

Creation of the Agent:

from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_community.chat_message_histories import ChatMessageHistory
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_core.tools import BaseTool
from langchain_openai import ChatOpenAI
from loguru import logger  # matches the log format in the trace below


class StreamEventName:
    """String constants for the astream_events v1 event names (a stand-in for
    the reporter's own enum, which was not included in the report)."""

    on_chain_start = "on_chain_start"
    on_chain_end = "on_chain_end"
    on_chat_model_stream = "on_chat_model_stream"
    on_tool_start = "on_tool_start"
    on_tool_end = "on_tool_end"


class Agent:
    def __init__(self, tools: list[BaseTool], prompt: ChatPromptTemplate) -> None:
        self.llm = ChatOpenAI(
            streaming=True,
            model="gpt-4o",
            temperature=0.01,
        )

        self.history = ChatMessageHistory()

        agent = create_openai_tools_agent(llm=self.llm, tools=tools, prompt=prompt)
        self.agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=False)
        self.agent_with_chat_history = RunnableWithMessageHistory(
            self.agent_executor,
            lambda session_id: self.history,
            input_messages_key="input",
            history_messages_key="chat_history",
        ).with_config({"run_name": "Agent"})

    async def send(self, message: str, session_id: str) -> None:
        """Send a message for the given conversation.

        Args:
            message (str): the user message to send to the agent
            session_id (str): identifier of the conversation
        """
        try:
            async for event in self.agent_with_chat_history.astream_events(
                {"input": message},
                config={"configurable": {"session_id": session_id}},
                version="v1",
            ):
                kind = event["event"]
                if kind != StreamEventName.on_chat_model_stream:
                    logger.debug(event)
                if kind == StreamEventName.on_chain_start:
                    # self.latency_monitorer.report_event("Chain start")
                    # "Agent" was assigned with `.with_config({"run_name": "Agent"})`
                    if event["name"] == "Agent":
                        logger.debug(
                            f"Starting agent: {event['name']} with input: {event['data'].get('input')}"
                        )
                elif kind == StreamEventName.on_chain_end:
                    # self.latency_monitorer.report_event("Chain end")
                    if event["name"] == "Agent":
                        logger.debug("--")
                        logger.debug(
                            f"Done agent: {event['name']} with output: {event['data'].get('output')['output']}"
                        )
                if kind == StreamEventName.on_chat_model_stream:
                    content = event["data"]["chunk"].content
                    if content:
                        logger.debug(content)
                elif kind == StreamEventName.on_tool_start:
                    logger.debug("--")
                    logger.debug(
                        f"Starting tool: {event['name']} with inputs: {event['data'].get('input')}"
                    )
                elif kind == StreamEventName.on_tool_end:
                    logger.debug(f"Done tool: {event['name']}")
                    logger.debug(f"Tool output was: {event['data'].get('output')}")
                    logger.debug("--")

        except Exception as err:
            logger.error(err)

Defining the prompt:

system = """You are a friendly robot that gives information about the weather. \
Always tell the person you are talking to, when you are going to call a tool, \
that they might need to wait a little bit."""

from langchain_core.tools import tool


@tool
def get_weather(location: str) -> str:
    """Retrieve the weather for the given location."""
    return "bad weather"


class Dialog:
    def __init__(self) -> None:
        self.prompt = ChatPromptTemplate.from_messages(
            [
                ("system", system),
                ("placeholder", "{chat_history}"),
                ("human", "{input}"),
                ("placeholder", "{agent_scratchpad}"),
            ]
        )
        self.agent = Agent([get_weather], self.prompt)
        self.session_id = "foo"

    async def run(self) -> None:
        # Run one turn of the conversation and dump the stored history.
        await self.agent.send(
            "What's the weather like in Brest ?",
            self.session_id,
        )
        print(self.agent.history.json())
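
For completeness, the snippets above can be driven end to end like this (the `main` coroutine is just an illustrative entry point, not part of the original report):

import asyncio


async def main() -> None:
    dialog = Dialog()
    await dialog.run()  # sends one message and prints the stored history


if __name__ == "__main__":
    asyncio.run(main())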

Error Message and Stack Trace (if applicable)

2024-05-31 12:36:00.593 | DEBUG    | agent:send:85 - Starting agent: Agent with input: {'input': "What's the weather like in Brest ?"}
Parent run 5d91f12e-1744-4a05-b1c1-04c7b3f6ba6f not found for run d80a4e7d-daf5-4367-b1f0-393a45c89d93. Treating as a root run.
2024-05-31 12:36:01.503 | DEBUG    | agent:send:110 - Sure
2024-05-31 12:36:01.515 | DEBUG    | agent:send:110 - ,
2024-05-31 12:36:01.545 | DEBUG    | agent:send:110 -  let
2024-05-31 12:36:01.556 | DEBUG    | agent:send:110 -  me
2024-05-31 12:36:01.563 | DEBUG    | agent:send:110 -  check
2024-05-31 12:36:01.572 | DEBUG    | agent:send:110 -  the
2024-05-31 12:36:01.579 | DEBUG    | agent:send:110 -  weather
2024-05-31 12:36:01.588 | DEBUG    | agent:send:110 -  in
2024-05-31 12:36:01.595 | DEBUG    | agent:send:110 -  Brest
2024-05-31 12:36:01.622 | DEBUG    | agent:send:110 -  for
2024-05-31 12:36:01.630 | DEBUG    | agent:send:110 -  you
2024-05-31 12:36:01.653 | DEBUG    | agent:send:110 - .
2024-05-31 12:36:01.661 | DEBUG    | agent:send:110 -  This
2024-05-31 12:36:01.693 | DEBUG    | agent:send:110 -  might
2024-05-31 12:36:01.702 | DEBUG    | agent:send:110 -  take
2024-05-31 12:36:01.714 | DEBUG    | agent:send:110 -  a
2024-05-31 12:36:01.722 | DEBUG    | agent:send:110 -  little
2024-05-31 12:36:01.802 | DEBUG    | agent:send:110 -  bit
2024-05-31 12:36:01.810 | DEBUG    | agent:send:110 - ,
2024-05-31 12:36:01.819 | DEBUG    | agent:send:110 -  so
2024-05-31 12:36:01.827 | DEBUG    | agent:send:110 -  please
2024-05-31 12:36:01.836 | DEBUG    | agent:send:110 -  bear
2024-05-31 12:36:01.846 | DEBUG    | agent:send:110 -  with
2024-05-31 12:36:02.101 | DEBUG    | agent:send:110 -  me
2024-05-31 12:36:02.111 | DEBUG    | agent:send:110 - .
2024-05-31 12:36:02.303 | DEBUG    | agent:send:112 - --
2024-05-31 12:36:02.303 | DEBUG    | agent:send:113 - Starting tool: get_weather with inputs: {'location': 'Brest'}
2024-05-31 12:36:02.312 | DEBUG    | agent:send:119 - Done tool: get_weather
2024-05-31 12:36:02.313 | DEBUG    | agent:send:120 - Tool output was: bad weather
2024-05-31 12:36:02.313 | DEBUG    | agent:send:121 - --
2024-05-31 12:36:03.341 | DEBUG    | agent:send:110 - The
2024-05-31 12:36:03.387 | DEBUG    | agent:send:110 -  weather
2024-05-31 12:36:03.400 | DEBUG    | agent:send:110 -  in
2024-05-31 12:36:03.446 | DEBUG    | agent:send:110 -  Brest
2024-05-31 12:36:03.456 | DEBUG    | agent:send:110 -  is
2024-05-31 12:36:03.507 | DEBUG    | agent:send:110 -  currently
2024-05-31 12:36:03.520 | DEBUG    | agent:send:110 -  bad
2024-05-31 12:36:03.542 | DEBUG    | agent:send:110 - .
2024-05-31 12:36:03.556 | DEBUG    | agent:send:110 -  If
2024-05-31 12:36:03.623 | DEBUG    | agent:send:110 -  you
2024-05-31 12:36:03.632 | DEBUG    | agent:send:110 -  need
2024-05-31 12:36:03.671 | DEBUG    | agent:send:110 -  more
2024-05-31 12:36:03.684 | DEBUG    | agent:send:110 -  specific
2024-05-31 12:36:03.698 | DEBUG    | agent:send:110 -  details
2024-05-31 12:36:03.731 | DEBUG    | agent:send:110 - ,
2024-05-31 12:36:03.742 | DEBUG    | agent:send:110 -  feel
2024-05-31 12:36:03.773 | DEBUG    | agent:send:110 -  free
2024-05-31 12:36:03.787 | DEBUG    | agent:send:110 -  to
2024-05-31 12:36:03.819 | DEBUG    | agent:send:110 -  ask
2024-05-31 12:36:03.832 | DEBUG    | agent:send:110 - !
2024-05-31 12:36:03.948 | DEBUG    | agent:send:93 - --
2024-05-31 12:36:03.948 | DEBUG    | agent:send:94 - Done agent: Agent with output: The weather in Brest is currently bad. If you need more specific details, feel free to ask!
{"messages": [{"content": "What's the weather like in Brest ?", "additional_kwargs": {}, "response_metadata": {}, "type": "human", "name": null, "id": null, "example": false}, {"content": "The weather in Brest is currently bad. If you need more specific details, feel free to ask!", "additional_kwargs": {}, "response_metadata": {}, "type": "ai", "name": null, "id": null, "example": false, "tool_calls": [], "invalid_tool_calls": []}]}

Description

What is currently happening: the AIMessage streamed before the tool call ("Sure, let me check the weather in Brest for you. This might take a little bit, so please bear with me.") never ends up in the agent memory. As the JSON dump above shows, self.history only contains the initial HumanMessage and the final AIMessage.

What I expect to happen: that intermediate AIMessage should also be registered in the agent memory, since it was played to the user and is part of the dialogue.

Please let me know if anything is unclear or if the problem lies with my implementation.

Thanks in advance,

System Info


System Information
------------------
> OS:  Linux
> OS Version:  #1 SMP Thu Jan 11 04:09:03 UTC 2024
> Python Version:  3.12.1 (main, Mar 26 2024, 17:07:43) [GCC 11.4.0]

Package Information
-------------------
> langchain_core: 0.1.52
> langchain: 0.1.20
> langchain_community: 0.0.38
> langsmith: 0.1.63
> langchain_openai: 0.1.7
> langchain_text_splitters: 0.0.2

Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:

> langgraph
> langserve

keenborder786 commented 4 months ago

This is how the history works: for each turn it only stores the HumanMessage (the initial input) and the AIMessage (the final output). None of the intermediate steps are stored in the agent history.
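
In other words, after the single turn above the stored history contains exactly two messages. A minimal sketch, where history stands for the ChatMessageHistory instance from the report (self.agent.history in the Dialog example):

from langchain_core.messages import AIMessage, HumanMessage

# Only the top-level input/output pair is recorded by
# RunnableWithMessageHistory; the "Sure, let me check..." AIMessage and the
# tool invocation never reach the history.
assert [type(m) for m in history.messages] == [HumanMessage, AIMessage]
assert history.messages[0].content == "What's the weather like in Brest ?"
assert history.messages[1].content.startswith("The weather in Brest is currently bad")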

Benjaminrivard commented 4 months ago

@keenborder786 Thank you very much for your response. It would be great to have the ability to configure this behavior so intermediate step messages are also saved in the history, because they are actually part of the dialogue.

In the meantime, I will try to build a workaround using the "on_chain_end" event.
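
A minimal sketch of that kind of workaround, adapted from the send loop above: buffer the streamed tokens and, when a tool call starts, flush them into the history as an extra AIMessage (this hooks on_tool_start rather than on_chain_end; treat the exact event choice as an assumption):

# Inside Agent.send: accumulate streamed tokens and register the
# "please wait" message in the history before the tool runs.
buffer: list[str] = []
async for event in self.agent_with_chat_history.astream_events(
    {"input": message},
    config={"configurable": {"session_id": session_id}},
    version="v1",
):
    kind = event["event"]
    if kind == "on_chat_model_stream":
        content = event["data"]["chunk"].content
        if content:
            buffer.append(content)
    elif kind == "on_tool_start" and buffer:
        self.history.add_ai_message("".join(buffer))
        buffer.clear()

Note that RunnableWithMessageHistory only writes the HumanMessage/AIMessage pair once the run finishes, so a message added mid-run will land before that pair in the history; the ordering may need to be fixed up afterwards.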