run-llama / llama_index

LlamaIndex is a data framework for your LLM applications
https://docs.llamaindex.ai
MIT License

[Question]: How to enable "Calling function" print out after querying from Multi-Document Agent example #14776

Open gavinslim opened 1 month ago

gavinslim commented 1 month ago

Question Validation

Question

I'm following the wikipedia multi-document agents example from: https://docs.llamaindex.ai/en/stable/examples/agent/multi_document_agents/. Instead of using OpenAI for the LLM, I am using AzureOpenAI.

In the wikipedia example, when a query is sent, "Calling function" descriptions are printed out along with the output to show which tool was called. However, when I perform the query with the top agent, I don't see this additional verbosity. Please see below for what I'm observing:

Enter your query ['q' to exit]: tell me about the arts and culture of Houston
Houston, Texas, is a vibrant city with a rich arts and culture scene that reflects its diverse population. The city is home to a dynamic mix of visual and performing arts, with numerous museums, galleries, theaters, and cultural institutions.

**Museums and Galleries:**
- **The Museum District:** This area is home to 19 museums, including the Houston Museum of Natural Science, the Museum of Fine Arts, Houston (MFAH), and the Contemporary Arts Museum Houston (CAMH). The MFAH boasts an impressive collection of artwork from around the world, while CAMH focuses on modern and contemporary art.
- **The Menil Collection:** A museum that houses the private art collection of philanthropists John and Dominique de Menil. The collection includes a wide range of art from antiquity to the 20th century, with a particular focus on Surrealism.

**Performing Arts:**
- **Theater District:** Houston's Theater District is one of the most prominent in the country, with nearly 13,000 seats in a concentrated downtown area. It features nine major performing arts organizations, including:
  - **The Houston Grand Opera:** Known for its innovative and award-winning productions.
  - **The Houston Symphony:** Offers a full range of classical programming.
  - **The Houston Ballet:** Recognized as one of the top ballet companies in the United States.
- **Alley Theatre:** One of the oldest professional theater companies in Texas, known for its rich repertoire and productions.

**Music and Festivals:**
- Houston has a thriving music scene that spans genres from classical to country, jazz to hip-hop, and everything in between. The city hosts numerous music festivals throughout the year, including the Houston Livestock Show and Rodeo, which features big-name concerts.
- The Bayou City Art Festival is another major event that showcases the works of hundreds of artists and attracts visitors from all over.

**Cultural Institutions:**
- **Asia Society Texas Center:** Offers a wide range of programs in the arts, culture, business, and policy related to Asia.
- **The African American Library at the Gregory School:** Houston's first colored public school, now a library and a repository for African American history and culture.

**Culinary Arts:**
- Houston's culinary scene is also a reflection of its cultural diversity, with a vast array of international cuisines and a strong tradition of barbecue and Tex-Mex.

**Public Art and Architecture:**
- The city has an active public art program, with numerous installations and sculptures throughout the city. Notable architecture includes the Rothko Chapel and the James Turrell Skyspace.

Houston's arts and culture are dynamic and ever-evolving, with a strong community support system that includes numerous non-profit organizations and patrons dedicated to fostering the arts. 

How do I display additional details on the tools that were used by the top agent? Is it because I'm not using OpenAI?

Thanks

dosubot[bot] commented 1 month ago

Hello @gavinslim! I'm here to help you with any bugs, questions, or contributions you might have. Let's work together to get things sorted out!

To display additional details on the tools being used by the top agent, you can use the verbose=True parameter when creating the agent. This will provide more detailed output about the tools being used.

Here is an example of how to set this up:

from llama_index.core.agent import ReActAgent
from llama_index.llms.azure_openai import AzureOpenAI
from llama_index.core.tools import FunctionTool
from llama_index.tools.yahoo_finance import YahooFinanceToolSpec

# Define your tools
def multiply(a: float, b: float) -> float:
    """Multiply two numbers and return the product."""
    return a * b

multiply_tool = FunctionTool.from_defaults(fn=multiply)

def add(a: float, b: float) -> float:
    """Add two numbers and return the sum."""
    return a + b

add_tool = FunctionTool.from_defaults(fn=add)

# Fetch tools from LlamaHub
finance_tools = YahooFinanceToolSpec().to_tool_list()
finance_tools.extend([multiply_tool, add_tool])

# Create the agent with verbose=True
agent = ReActAgent.from_tools(finance_tools, verbose=True)

response = agent.chat("What is the current price of NVDA?")
print(response)

The verbose=True parameter will ensure that additional details about the tools are displayed during the agent's operation. This is not dependent on whether you are using OpenAI or AzureOpenAI [1][2][3].

To continue talking to Dosu, mention @dosu.

gavinslim commented 1 month ago

Thanks for the reply. I have the verbose parameter set to True when creating the top agent, along with the agent for each document. See the code below. Is there anything else I should consider?

    # Create top agent
    top_agent = OpenAIAgent.from_tools(
        tool_retriever=obj_index.as_retriever(similarity_top_k=3),
        system_prompt=""" \
            You are an agent designed to answer queries about a set of given cities.
            Please always use the tools provided to answer a question. Do not rely on prior knowledge.\
        """,
        verbose=True,
    )
dosubot[bot] commented 1 month ago

To ensure that the additional details on the tools being used by the top agent are displayed, you should consider the following:

  1. Tool Name and Description: Ensure the tool name and description are well-defined and informative. This helps the LLM understand the purpose and usage of the tool better.

    tool = FunctionTool.from_defaults(
        get_weather,
        name="WeatherTool",
        description="A tool for getting the weather for a given location."
    )
  2. Verbose Mode: Enable verbose mode when creating the agent, which you have already done.

    agent = ReActAgent.from_tools(tools, llm=llm, verbose=True)
  3. Return Direct Option: Consider using the return_direct option if you want the response from the tool to be returned directly without being interpreted and rewritten by the agent.

    tool = QueryEngineTool.from_defaults(
        query_engine,
        name="QueryEngineTool",
        description="A tool for querying the engine.",
        return_direct=True,
    )
    
    agent = OpenAIAgent.from_tools([tool])
  4. Debugging Tools: Use debugging tools to inspect the tool schema and ensure it is correctly defined.

    schema = tool.metadata.get_parameters_dict()
    print(schema)
  5. Using Existing Tools from LlamaHub: Leverage existing tools from LlamaHub to expand the agent's capabilities. For example, you can use the Yahoo Finance tool to fetch financial data.

    from llama_index.tools.yahoo_finance import YahooFinanceToolSpec
    
    finance_tools = YahooFinanceToolSpec().to_tool_list()
    finance_tools.extend([multiply_tool, add_tool])
    
    agent = ReActAgent.from_tools(finance_tools, verbose=True)
    response = agent.chat("What is the current price of NVDA?")
    print(response)
  6. Forcing Tool Usage: If you want the agent to always use a specific tool, you can pass only that tool to the agent. This ensures that the agent uses the specified tool for its responses.

    tools.append(Tool(
        name="Search",
        func=search.run,
        description="Always use this tool first!",
        return_direct=True
    ))
    
    agent_chain = initialize_agent(tools, llm, agent="conversational-react-description", memory=memory, verbose=True, return_intermediate_steps=True)

By considering these aspects, you can ensure that the additional details on the tools being used by the top agent are displayed effectively [1][2][3].

To continue talking to Dosu, mention @dosu.

gavinslim commented 1 month ago

@dosu, I'm encountering another issue. It seems like my tools are missing to_openai_tool

    response = top_agent.query("Please explain the files you have access to")
  File "C:\Users\gavinlim\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_index\core\instrumentation\dispatcher.py", line 274, in wrapper
    result = func(*args, **kwargs)
  File "C:\Users\gavinlim\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_index\core\base\base_query_engine.py", line 53, in query
    query_result = self._query(str_or_query_bundle)
  File "C:\Users\gavinlim\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_index\core\callbacks\utils.py", line 41, in wrapper
    return func(self, *args, **kwargs)
  File "C:\Users\gavinlim\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_index\core\base\agent\types.py", line 43, in _query
    agent_response = self.chat(
  File "C:\Users\gavinlim\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_index\core\instrumentation\dispatcher.py", line 274, in wrapper
    result = func(*args, **kwargs)
  File "C:\Users\gavinlim\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_index\core\callbacks\utils.py", line 41, in wrapper
    return func(self, *args, **kwargs)
  File "C:\Users\gavinlim\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_index\core\agent\runner\base.py", line 644, in chat
    chat_response = self._chat(
  File "C:\Users\gavinlim\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_index\core\instrumentation\dispatcher.py", line 274, in wrapper
    result = func(*args, **kwargs)
  File "C:\Users\gavinlim\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_index\core\agent\runner\base.py", line 574, in _chat
    cur_step_output = self._run_step(
  File "C:\Users\gavinlim\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_index\core\instrumentation\dispatcher.py", line 274, in wrapper
    result = func(*args, **kwargs)
  File "C:\Users\gavinlim\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_index\core\agent\runner\base.py", line 411, in _run_step
    cur_step_output = self.agent_worker.run_step(step, task, **kwargs)
  File "C:\Users\gavinlim\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_index\core\callbacks\utils.py", line 41, in wrapper
    return func(self, *args, **kwargs)
  File "C:\Users\gavinlim\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_index\agent\openai\step.py", line 728, in run_step
    return self._run_step(
  File "C:\Users\gavinlim\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_index\agent\openai\step.py", line 567, in _run_step
    openai_tools = [tool.metadata.to_openai_tool() for tool in tools]
  File "C:\Users\gavinlim\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_index\agent\openai\step.py", line 567, in <listcomp>
    openai_tools = [tool.metadata.to_openai_tool() for tool in tools]
AttributeError: 'dict' object has no attribute 'to_openai_tool'
dosubot[bot] commented 1 month ago

To resolve the issue with your tools missing the to_openai_tool attribute, ensure that the ToolMetadata class includes the to_openai_tool method. Here is the relevant part of the code:

@dataclass
class ToolMetadata:
    description: str
    name: Optional[str] = None
    fn_schema: Optional[Type[BaseModel]] = DefaultToolFnSchema

    def get_parameters_dict(self) -> dict:
        if self.fn_schema is None:
            parameters = {
                "type": "object",
                "properties": {
                    "input": {"title": "input query string", "type": "string"},
                },
                "required": ["input"],
            }
        else:
            parameters = self.fn_schema.schema()
            parameters = {
                k: v
                for k, v in parameters.items()
                if k in ["type", "properties", "required", "definitions"]
            }
        return parameters

    @property
    def fn_schema_str(self) -> str:
        """Get fn schema as string."""
        if self.fn_schema is None:
            raise ValueError("fn_schema is None.")
        parameters = self.get_parameters_dict()
        return json.dumps(parameters)

    def get_name(self) -> str:
        """Get name."""
        if self.name is None:
            raise ValueError("name is None.")
        return self.name

    @deprecated(
        "Deprecated in favor of `to_openai_tool`, which should be used instead."
    )
    def to_openai_function(self) -> Dict[str, Any]:
        """Deprecated and replaced by `to_openai_tool`.
        The name and arguments of a function that should be called, as generated by the
        model.
        """
        return {
            "name": self.name,
            "description": self.description,
            "parameters": self.get_parameters_dict(),
        }

    def to_openai_tool(self) -> Dict[str, Any]:
        """To OpenAI tool."""
        return {
            "type": "function",
            "function": {
                "name": self.name,
                "description": self.description,
                "parameters": self.get_parameters_dict(),
            },
        }

Ensure that your ToolMetadata class includes the to_openai_tool method as shown above. If your tools' metadata attributes are instances of ToolMetadata, the to_openai_tool method will be available on them [1][2][3].
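
As a quick sanity check, the schema-building part of to_openai_tool can be exercised on its own. The snippet below uses a simplified stand-in for ToolMetadata (the class and field names mirror the code above, but this is an illustrative sketch, not the real llama_index class) to show the shape of the tool dict the OpenAI tools API expects:

```python
from dataclasses import dataclass
from typing import Any, Dict, Optional


# Simplified stand-in for llama_index's ToolMetadata, only to illustrate
# the dict that to_openai_tool() produces. Real code should import
# ToolMetadata from llama_index.core.tools instead.
@dataclass
class SimpleToolMetadata:
    description: str
    name: Optional[str] = None

    def get_parameters_dict(self) -> Dict[str, Any]:
        # Default schema used when no fn_schema is supplied.
        return {
            "type": "object",
            "properties": {
                "input": {"title": "input query string", "type": "string"},
            },
            "required": ["input"],
        }

    def to_openai_tool(self) -> Dict[str, Any]:
        return {
            "type": "function",
            "function": {
                "name": self.name,
                "description": self.description,
                "parameters": self.get_parameters_dict(),
            },
        }


meta = SimpleToolMetadata(name="city_tool", description="Answers questions about a city.")
tool_dict = meta.to_openai_tool()
print(tool_dict["function"]["name"])  # city_tool
```

If tool.metadata is an object with this shape, the list comprehension in step.py will work; if it is a plain dict, the AttributeError in the traceback above is exactly what you would see.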

To continue talking to Dosu, mention @dosu.

gavinslim commented 1 month ago

@dosu, My tools are not objects of the ToolMetadata class. In the step.py module, self.get_tools(task.input) returns a list of tools, and each tool's metadata attribute is a plain dict that only contains the name of the tool.

    def _run_step(
        self,
        step: TaskStep,
        task: Task,
        mode: ChatResponseMode = ChatResponseMode.WAIT,
        tool_choice: Union[str, dict] = "auto",
    ) -> TaskStepOutput:
        """Run step."""
        if step.input is not None:
            add_user_step_to_memory(
                step, task.extra_state["new_memory"], verbose=self._verbose
            )
        # TODO: see if we want to do step-based inputs
        # print(f'task.input: {task.input}')
        tools = self.get_tools(task.input)
        # print(f'Tools: {tools}')
        # for tool in tools:
            # print(f"DEBUG: {tool.metadata}")
        # exit()
        openai_tools = [tool.metadata.to_openai_tool() for tool in tools]
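
One possible workaround for the situation described above (a hedged sketch, not necessarily the root-cause fix): if a retriever hands back tools whose metadata is a plain dict, the dicts could be re-wrapped into metadata objects before to_openai_tool() is called. The helper name ensure_tool_metadata is hypothetical, and the metadata class is again a simplified stand-in to keep the example self-contained:

```python
from dataclasses import dataclass
from typing import Any, Dict


# Simplified stand-in for ToolMetadata; real code would use
# llama_index.core.tools.ToolMetadata instead.
@dataclass
class SimpleToolMetadata:
    name: str
    description: str = ""

    def to_openai_tool(self) -> Dict[str, Any]:
        return {
            "type": "function",
            "function": {"name": self.name, "description": self.description},
        }


def ensure_tool_metadata(metadata: Any) -> SimpleToolMetadata:
    """Hypothetical helper: re-wrap plain-dict metadata into a metadata object."""
    if isinstance(metadata, dict):
        return SimpleToolMetadata(
            name=metadata.get("name", ""),
            description=metadata.get("description", ""),
        )
    return metadata


# Simulate what the traceback shows: metadata arriving as a bare dict.
raw_metadata = {"name": "houston_tool"}
fixed = ensure_tool_metadata(raw_metadata)
print(fixed.to_openai_tool()["function"]["name"])  # houston_tool
```

This only papers over the symptom; it is still worth finding out why the object index retriever returns tools with dict metadata in the first place (for example, a serialization round-trip that loses the ToolMetadata type).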