run-llama / llama_index

LlamaIndex is a data framework for your LLM applications
https://docs.llamaindex.ai
MIT License

[Bug]: ReAct agent function calling always got Thought: (Implicit) I can answer without any more tools! #16276

Open chemistrywow31 opened 14 hours ago

chemistrywow31 commented 14 hours ago

Bug Description

The agent's function tools are no longer invoked after calling .update_prompts(): the agent answers with "Thought: (Implicit) I can answer without any more tools!" instead of calling the tool.

Is this an error in my usage, or is it a bug?

Thank you in advance for your response.

Version

llama-index-core-0.11.14

Steps to Reproduce

def test_react_agent():
    from llama_index.llms.azure_openai import AzureOpenAI
    from llama_index.core.memory import ChatMemoryBuffer
    from llama_index.core.agent import ReActAgent
    from llama_index.core.tools import FunctionTool
    from llama_index.core import PromptTemplate
    from datetime import datetime

    memory = ChatMemoryBuffer.from_defaults(token_limit=1500)

    llm = AzureOpenAI(
        model="gpt-4o",
        deployment_name=DEPLOYMENT_NAME,
        api_key=API_KEY,
        azure_endpoint=BASE_URL,
        api_version="2023-07-01-preview",
        temperature=0
    )

    def get_current_date() -> str:
        """ get current date without parameter and return date string with format '%Y-%m-%d' """

        return datetime.now().strftime("%Y-%m-%d")

    tools = [FunctionTool.from_defaults(fn=get_current_date)]

    agent = ReActAgent.from_tools(
        tools=tools,
        llm=llm,
        chat_history=None,
        memory=memory,
        verbose=True,
        callback_manager=llm.callback_manager
    )

    agent.reset()
    llm_response = agent.chat(
        message="May I ask what the date is today?"
    )

    print(llm_response.response)

    sys_prompt = "you are a nice person, please answer the question politely and with a smile."
    react_system_prompt = PromptTemplate(sys_prompt)

    agent.update_prompts({"agent_worker:system_prompt": react_system_prompt})
    agent.reset()

    llm_response = agent.chat(
        message="May I ask what the date is today?"
    )

    print(llm_response.response)

Relevant Logs/Tracebacks

> Running step 96ec3e9c-26a6-4cf9-85dd-f672ce8ae694. Step input: May I ask what the date is today?
Thought: The current language of the user is English. I need to use a tool to help me answer the question.
Action: get_current_date
Action Input: {}
Observation: 2024-09-28
> Running step 662a3626-e279-4fd4-8811-a7c563609a07. Step input: None
Thought: I can answer without using any more tools. I'll use the user's language to answer.
Answer: Today's date is September 28, 2024.
Today's date is September 28, 2024.
> Running step be7e3efe-64b1-46e0-bbe3-ff4a2d29fb0d. Step input: May I ask what the date is today?
Thought: (Implicit) I can answer without any more tools!
Answer: Of course! 😊 Today is October 6, 2023. How can I assist you further?
Of course! 😊 Today is October 6, 2023. How can I assist you further?
dosubot[bot] commented 14 hours ago

Hello, @chemistrywow31! I'm here to help you with any bugs, questions, or contributions. Let's tackle this issue together!

The issue with the ReActAgent function tools not working after calling .update_prompts() in llama-index-core-0.11.14 appears to be a bug. Comments in the code indicate that the ReActAgent does not explicitly manage PromptTemplate objects and would need a refactor to expose its prompts properly.

Supporting evidence from the code:

def _get_prompts(self) -> PromptDictType:
    """Get prompts."""
    # TODO: the ReAct formatter does not explicitly specify PromptTemplate
    # objects, but wrap it in this to obey the interface
    sys_header = self._react_chat_formatter.system_header
    return {"system_prompt": PromptTemplate(sys_header)}

This comment suggests that the current implementation does not handle prompts correctly for the ReActAgent, which could lead to issues when calling .update_prompts(). Additionally, the ReActAgentWorker class relies on the ReActChatFormatter to format prompts, but it does not explicitly manage or expose these prompts, which further supports the likelihood of a bug [1][2][3].
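To make the round-trip concrete, here is a minimal stdlib-only sketch (these are stand-in classes, not the real llama_index implementations) that mirrors the _get_prompts/_update_prompts behavior quoted above and shows why the entire system header gets replaced:

```python
class FakePromptTemplate:
    """Stand-in for llama_index.core.PromptTemplate; assumes it stores the raw template string."""
    def __init__(self, template: str):
        self.template = template

class FakeReActChatFormatter:
    """Stand-in for ReActChatFormatter: holds the full ReAct system header."""
    def __init__(self, system_header: str):
        self.system_header = system_header

class FakeReActWorker:
    """Stand-in for ReActAgentWorker's prompt handling."""
    def __init__(self, formatter: FakeReActChatFormatter):
        self._react_chat_formatter = formatter

    def _get_prompts(self):
        # Wrap the raw header string in a template object, as the quoted code does.
        return {"system_prompt": FakePromptTemplate(self._react_chat_formatter.system_header)}

    def _update_prompts(self, prompts):
        # Overwrite the ENTIRE header with the new template's text.
        if "system_prompt" in prompts:
            self._react_chat_formatter.system_header = prompts["system_prompt"].template

worker = FakeReActWorker(FakeReActChatFormatter("...ReAct tool-use instructions and {tool_desc}..."))
worker._update_prompts({"system_prompt": FakePromptTemplate("you are a nice person")})
print(worker._get_prompts()["system_prompt"].template)  # the tool-use instructions are gone
```

In this model, nothing merges the new text with the old header, which matches the behavior in the bug report: after the update, the agent no longer has instructions telling it how (or that) it should call tools.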

As a workaround, make sure any template you pass still contains the full ReAct format instructions (the {tool_desc} placeholder and the Thought/Action/Observation format), since update_prompts replaces the entire system header; a plain persona prompt likely leaves the agent with no instructions to call tools, which would explain the implicit answer in your logs. Here is how the update_prompts method works internally:

def _update_prompts(self, prompts: PromptDictType) -> None:
    """Update prompts."""
    if "system_prompt" in prompts:
        sys_prompt = cast(PromptTemplate, prompts["system_prompt"])
        self._react_chat_formatter.system_header = sys_prompt.template

This method checks if the prompts dictionary contains a key "system_prompt". If it does, it casts the corresponding value to a PromptTemplate and updates the system_header attribute of the ReActChatFormatter with the template attribute of the PromptTemplate [4].
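A hedged workaround sketch (plain string composition; the default_header below is an abbreviated placeholder, not the real REACT_CHAT_SYSTEM_HEADER): prepend the custom persona to the existing header instead of replacing it. With the real library, you would read the current header from agent.get_prompts() before composing the new template.

```python
# Abbreviated stand-in for the ReAct system header; the real one is read
# from agent.get_prompts()["agent_worker:system_prompt"].template.
default_header = (
    "You are designed to help with a variety of tasks.\n"
    "## Tools\n"
    "{tool_desc}\n"  # placeholder kept so tool descriptions still render
    "Use the Thought/Action/Action Input/Observation format when calling tools.\n"
)

persona = "you are a nice person, please answer the question politely and with a smile."

# Prepend the persona; the ReAct tool-use instructions survive the update.
combined = persona + "\n\n" + default_header
```

Passing PromptTemplate(combined) to update_prompts should then customize the tone without stripping the tool-calling instructions.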

Additionally, there are specific examples in the codebase that demonstrate the correct usage of update_prompts with ReActAgent. You can refer to the ReAct Agent notebook which showcases the ReAct agent over simple calculator tools. This notebook includes setting up tools, running queries, and customizing prompts [5].

For more advanced usage, you can refer to the Agent Runner with HyDE for PDFs notebook [6].

To continue talking to Dosu, mention @dosu.