run-llama / llama_index

LlamaIndex is a data framework for your LLM applications
https://docs.llamaindex.ai
MIT License

[Question]: Can ReAct Agent reasoning without tools #14343

Open nanyoullm opened 1 week ago

nanyoullm commented 1 week ago

Question

I am trying the ReActAgent example from https://docs.llamaindex.ai/en/stable/examples/agent/react_agent/?h=react. I commented out one of the tools, hoping the agent would fall back on the LLM's own reasoning when the available tools are insufficient. I also passed tool_choice='auto' to the .chat method, and I chose LlamaAPI as my LLM.

My code:


from llama_index.core.agent import ReActAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.llama_api import LlamaAPI

import os
os.environ["http_proxy"] = "http://127.0.0.1:10887"
os.environ["https_proxy"] = "http://127.0.0.1:10887"

def multiply(a: int, b: int) -> int:
    """Multiply two integers and returns the result integer"""
    return a * b

multiply_tool = FunctionTool.from_defaults(fn=multiply)

# def add(a: int, b: int) -> int:
#     """Add two integers and returns the result integer"""
#     return a + b

# add_tool = FunctionTool.from_defaults(fn=add)

llm2 = LlamaAPI(model='llama3-70b', api_key='')

agent = ReActAgent.from_tools([multiply_tool], llm=llm2, verbose=True)
response = agent.chat("What is 20+(2*4)? Calculate step by step ", tool_choice='auto')
print('end')
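For reference, the arithmetic the agent is asked to perform can be checked in plain Python. With only multiply registered, the agent has no tool for the final addition, which is why it improvises (this sketch reuses the multiply function above and the add function that was commented out):

```python
def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the result."""
    return a * b

def add(a: int, b: int) -> int:
    """Add two integers and return the result (the tool commented out above)."""
    return a + b

# The trace a correct run would follow: 2*4 = 8, then 20+8 = 28.
product = multiply(2, 4)
result = add(20, product)
print(result)  # 28
```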

The log:

Thought: The current language of the user is: English. I need to use a tool to help me answer the question.
Action: multiply
Action Input: {'a': 2, 'b': 4}
Observation: 8
Observation: Error: Could not parse output. Please follow the thought-action-input format. Try again.
Thought: I have the result of the multiplication, now I need to add 20 to it.
Action: multiply
Action Input: {'a': 20, 'b': 1}
Observation: 20
Thought: Now I have the result of adding 20, I need to add 8 to it to get the final result.
Action: multiply
Action Input: {'a': 20, 'b': 1}
Observation: 20
Thought: I realize I made a mistake, I should add 8 to 20 instead of multiplying. I don't have an addition tool, but I can use the multiply tool to add 8 to 20 by multiplying 20 by 1 and then adding 8 to the result.
Action: multiply
Action Input: {'a': 8, 'b': 1}
Observation: 8
Observation: Error: Could not parse output. Please follow the thought-action-input format. Try again.
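The "Could not parse output" observations come from the ReAct output parser rejecting LLM responses that do not follow the Thought/Action/Action Input layout. A toy illustration of that kind of format check (a hedged sketch, not llama_index's actual parser, which is more involved):

```python
import re

# Toy ReAct-style format check: a step must contain a Thought line, an
# Action line naming a tool, and an Action Input line with a dict literal.
PATTERN = re.compile(
    r"Thought:\s*(?P<thought>.+?)\s*"
    r"Action:\s*(?P<action>\S+)\s*"
    r"Action Input:\s*(?P<input>\{.*\})",
    re.DOTALL,
)

def parse_step(text: str):
    """Return (tool_name, raw_input) or raise, mimicking the error in the log."""
    match = PATTERN.search(text)
    if match is None:
        raise ValueError(
            "Could not parse output. Please follow the thought-action-input format."
        )
    return match.group("action"), match.group("input")

well_formed = "Thought: multiply first.\nAction: multiply\nAction Input: {'a': 2, 'b': 4}"
print(parse_step(well_formed))  # ('multiply', "{'a': 2, 'b': 4}")
```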

So how should I configure the agent so that it falls back on the LLM's own reasoning when none of the registered tools fit the task?

dosubot[bot] commented 1 week ago

Hey there, @nanyoullm! I'm here to help you tackle bugs, answer questions, and become a contributor. Let's dive in and tackle this challenge together! 🤖

To configure the ReActAgent to leverage the inference capabilities of the large model itself when the agent does not have the necessary tools, you can initialize the ReActAgent without specifying any tools or tool retriever. This will ensure that the agent relies solely on the large model for inference.

Here is an example of how to configure the ReActAgent:

from llama_index.llms.llama_api import LlamaAPI
from llama_index.core.memory import ChatMemoryBuffer
from llama_index.core.agent import ReActAgent

# Initialize the LLM
llm = LlamaAPI(model='llama3-70b', api_key='')  # supply your own key; never post real keys publicly

# Initialize the memory
memory = ChatMemoryBuffer.from_defaults(chat_history=[], llm=llm)

# Create the ReActAgent without tools
react_agent = ReActAgent.from_tools(
    [],  # no tools specified
    llm=llm,
    memory=memory,
    max_iterations=10,
    verbose=True,
)

# Now you can use the react_agent for inference
response = react_agent.chat("What is 20+(2*4)? Calculate step by step")
print(response.response)

In this configuration, the ReActAgent has no tools to call and relies on the large model (llm) for all inference. The ReActAgent class handles the case where no tools or tool retriever are provided, defaulting to the large model for reasoning and response generation.
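For context, the ReAct prompt also lets the model skip tools entirely by emitting a direct answer (a line of the form "Answer: ..." after a thought like "I can answer without using any more tools"). A toy dispatcher showing both branches (a hypothetical sketch, not llama_index internals):

```python
def dispatch(llm_output: str, tools: dict):
    """Toy ReAct step handler: return a final answer if the model emitted
    one, otherwise report which registered tool it asked for."""
    for line in llm_output.splitlines():
        if line.startswith("Answer:"):
            # Direct-answer branch: no tool call needed.
            return ("answer", line[len("Answer:"):].strip())
        if line.startswith("Action:"):
            name = line[len("Action:"):].strip()
            if name not in tools:
                return ("error", f"unknown tool: {name}")
            return ("action", name)
    return ("error", "could not parse output")

tools = {"multiply": lambda a, b: a * b}
print(dispatch("Thought: I can answer without using any more tools.\nAnswer: 28", tools))
# ('answer', '28')
```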

To continue talking to Dosu, mention @dosu.