raoofnaushad opened this issue 4 months ago
To resolve the `KeyError: 'multi_tool_use.parallel'` when using `StructuredPlannerAgent` with `FunctionCallingAgentWorker`, ensure that the `allow_parallel_tool_calls` parameter is correctly set. The `FunctionCallingAgentWorker` class has an `allow_parallel_tool_calls` attribute that defaults to `True`.
Here is an example of how to initialize the `FunctionCallingAgentWorker` with the `allow_parallel_tool_calls` parameter:
```python
from llama_index.core.tools import FunctionTool
from llama_index.core.agent import (
    StructuredPlannerAgent,
    FunctionCallingAgentWorker,
)
from llama_index.core.base.llms.types import ChatMessage
from llama_index.llms.openai import OpenAI

# Example tools initialization (get_client_comparison and
# get_compared_to_peer_group are your own functions)
get_client_comparison_tool = FunctionTool.from_defaults(fn=get_client_comparison)
get_compared_to_peer_group_tool = FunctionTool.from_defaults(fn=get_compared_to_peer_group)

# Initialize a function-calling LLM. FunctionCallingLLM itself is an abstract
# base class, so instantiate a concrete subclass such as OpenAI.
llm = OpenAI(model="gpt-4o")  # placeholder; initialize your LLM here

prefix_messages = [ChatMessage(content="System message", role="system")]

# Initialize the FunctionCallingAgentWorker with allow_parallel_tool_calls set to True or False
worker = FunctionCallingAgentWorker(
    tools=[get_client_comparison_tool, get_compared_to_peer_group_tool],
    llm=llm,
    prefix_messages=prefix_messages,
    allow_parallel_tool_calls=True,  # or False, depending on your requirement
)

# Wrap the worker in the top-level planner
agent = StructuredPlannerAgent(
    agent_worker=worker,
    tools=[get_client_comparison_tool, get_compared_to_peer_group_tool],
    verbose=True,
)

import nest_asyncio

nest_asyncio.apply()

response = agent.chat(
    "You are now AI assistant for RBC Mobile Banking app for a given client to understand comparative spending & budgeting suggestions. "
    "Based on the insights below can you come up with a 'Budget Suggestion follow-up action' that helps the client to decide on how should he budget based on his peer group & his own transactions"
)
```
Ensure that the `allow_parallel_tool_calls` parameter is set according to your needs to avoid the `KeyError`. Additionally, make sure that the tools and LLM are properly initialized and passed to the `FunctionCallingAgentWorker` and `StructuredPlannerAgent` [1][2][3].
@dosu I changed my code like this:
```python
import os

from llama_index.core import Settings
from llama_index.llms.openai import OpenAI

# C and get_api_key are defined elsewhere in my project
Settings.llm = OpenAI(
    model=C.LLM_MODEL,
    temperature=0,
    max_tokens=4000,
    api_base=os.environ.get('GW_BASE_URL'),
    api_key=get_api_key(),
)

from llama_index.core.tools import FunctionTool
from llama_index.core.agent import (
    StructuredPlannerAgent,
    FunctionCallingAgentWorker,
)

get_client_comparison_tool = FunctionTool.from_defaults(fn=get_client_comparison)
get_compared_to_peer_group_tool = FunctionTool.from_defaults(fn=get_compared_to_peer_group)

# create the function calling worker for reasoning
worker = FunctionCallingAgentWorker.from_tools(
    tools=[get_client_comparison_tool, get_compared_to_peer_group_tool],
    verbose=True,
    allow_parallel_tool_calls=True,
)

# wrap the worker in the top-level planner
agent = StructuredPlannerAgent(
    worker,
    tools=[get_client_comparison_tool, get_compared_to_peer_group_tool],
    verbose=True,
)

import nest_asyncio

nest_asyncio.apply()

response = agent.chat(
    "You are now AI assistant for RBC Mobile Banking app for a given client to understand comparative spending & budgeting suggestions. "
    "Based on the insights below can you come up with a 'Budget Suggestion follow-up action' that helps the client to decide on how should he budget based on his peer group & his own transactions"
)
```
I'm getting the error again. The traceback is below:
```
--> 341 result = program(**prompt_args)
    342 dispatcher.event(LLMStructuredPredictEndEvent(output=result))
    343 return result

File ~/Documents/amplify/hermes/utilities/research_rag/venv/lib/python3.9/site-packages/llama_index/core/instrumentation/dispatcher.py:230, in Dispatcher.span.<locals>.wrapper(func, instance, args, kwargs)
    226 self.span_enter(
    227     id_=id_, bound_args=bound_args, instance=instance, parent_id=parent_id
    228 )
    229 try:
--> 230     result = func(*args, **kwargs)
    231 except BaseException as e:
    232     self.event(SpanDropEvent(span_id=id_, err_str=str(e)))

File ~/Documents/amplify/hermes/utilities/research_rag/venv/lib/python3.9/site-packages/llama_index/core/program/function_program.py:138, in FunctionCallingProgram.__call__(self, llm_kwargs, *args, **kwargs)
    135 messages = self._prompt.format_messages(llm=self._llm, **kwargs)
    136 messages = self._llm._extend_messages(messages)
--> 138 agent_response = self._llm.predict_and_call(
    139     [tool],
    140     chat_history=messages,
    141     verbose=self._verbose,
    142     allow_parallel_tool_calls=self._allow_parallel_tool_calls,
    143     **llm_kwargs,
    144 )
    145 return _parse_tool_outputs(
    146     agent_response,
    147     allow_parallel_tool_calls=self._allow_parallel_tool_calls,
    148 )

File ~/Documents/amplify/hermes/utilities/research_rag/venv/lib/python3.9/site-packages/llama_index/core/instrumentation/dispatcher.py:230, in Dispatcher.span.<locals>.wrapper(func, instance, args, kwargs)
    226 self.span_enter(
    227     id_=id_, bound_args=bound_args, instance=instance, parent_id=parent_id
    228 )
    229 try:
--> 230     result = func(*args, **kwargs)
    231 except BaseException as e:
    232     self.event(SpanDropEvent(span_id=id_, err_str=str(e)))

File ~/Documents/amplify/hermes/utilities/research_rag/venv/lib/python3.9/site-packages/llama_index/core/llms/function_calling.py:101, in FunctionCallingLLM.predict_and_call(self, tools, user_msg, chat_history, verbose, allow_parallel_tool_calls, **kwargs)
     92 response = self.chat_with_tools(
     93     tools,
     94     user_msg,
    (...)
     98     **kwargs,
     99 )
    100 tool_calls = self.get_tool_calls_from_response(response)
--> 101 tool_outputs = [
    102     call_tool_with_selection(tool_call, tools, verbose=verbose)
    103     for tool_call in tool_calls
    104 ]
    105 if allow_parallel_tool_calls:
    106     output_text = "\n\n".join(
    107         [tool_output.content for tool_output in tool_outputs]
    108     )

File ~/Documents/amplify/hermes/utilities/research_rag/venv/lib/python3.9/site-packages/llama_index/core/llms/function_calling.py:102, in <listcomp>(.0)
     92 response = self.chat_with_tools(
     93     tools,
     94     user_msg,
    (...)
     98     **kwargs,
     99 )
    100 tool_calls = self.get_tool_calls_from_response(response)
    101 tool_outputs = [
--> 102     call_tool_with_selection(tool_call, tools, verbose=verbose)
    103     for tool_call in tool_calls
    104 ]
    105 if allow_parallel_tool_calls:
    106     output_text = "\n\n".join(
    107         [tool_output.content for tool_output in tool_outputs]
    108     )

File ~/Documents/amplify/hermes/utilities/research_rag/venv/lib/python3.9/site-packages/llama_index/core/tools/calling.py:74, in call_tool_with_selection(tool_call, tools, verbose)
     72 print("=== Calling Function ===")
     73 print(f"Calling function: {name} with args: {arguments_str}")
---> 74 tool = tools_by_name[name]
     75 output = call_tool(tool, tool_call.tool_kwargs)
     77 if verbose:

KeyError: 'multi_tool_use.parallel'
```
The `KeyError: 'multi_tool_use.parallel'` indicates that the tool name `'multi_tool_use.parallel'` is not found in the `tools_by_name` dictionary. This dictionary is created by mapping tool names to their corresponding tool objects.
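For illustration, here is a minimal sketch of how that dictionary is built and why a hallucinated name fails; the `add` tool is a stand-in for whatever tools you registered:

```python
from llama_index.core.tools import FunctionTool

def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

add_tool = FunctionTool.from_defaults(fn=add)

# Mirrors the mapping built inside call_tool_with_selection.
tools_by_name = {tool.metadata.name: tool for tool in [add_tool]}
print(list(tools_by_name))  # ['add']

# If the LLM emits a tool name that was never registered, the plain
# dict lookup raises exactly the KeyError from the traceback above.
tools_by_name["multi_tool_use.parallel"]  # KeyError: 'multi_tool_use.parallel'
```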
To resolve this issue, ensure that the tools you are using are correctly initialized and that their names match the expected names in the `tools_by_name` dictionary. Here is an updated version of your code with additional checks and initialization:
```python
from llama_index.llms.openai import OpenAI
from llama_index.core.tools import FunctionTool
from llama_index.core.agent import StructuredPlannerAgent, FunctionCallingAgentWorker
from llama_index.core.base.llms.types import ChatMessage
import nest_asyncio
import os

# Initialize the LLM
llm = OpenAI(
    model="your_model_name",  # Replace with your model name
    temperature=0,
    max_tokens=4000,
    api_base=os.environ.get('GW_BASE_URL'),
    api_key="your_api_key",  # Replace with your API key
)

# Define your functions
def get_client_comparison():
    # Your function implementation
    pass

def get_compared_to_peer_group():
    # Your function implementation
    pass

# Initialize the tools
get_client_comparison_tool = FunctionTool.from_defaults(fn=get_client_comparison)
get_compared_to_peer_group_tool = FunctionTool.from_defaults(fn=get_compared_to_peer_group)

# Initialize the FunctionCallingAgentWorker with allow_parallel_tool_calls set to True
worker = FunctionCallingAgentWorker(
    tools=[get_client_comparison_tool, get_compared_to_peer_group_tool],
    llm=llm,
    prefix_messages=[ChatMessage(content="System message", role="system")],
    verbose=True,
    allow_parallel_tool_calls=True,
)

# Wrap the worker in the top-level planner
agent = StructuredPlannerAgent(
    agent_worker=worker,
    tools=[get_client_comparison_tool, get_compared_to_peer_group_tool],
    verbose=True,
)

# Apply nest_asyncio
nest_asyncio.apply()

# Run the agent
response = agent.chat(
    "You are now AI assistant for RBC Mobile Banking app for a given client to understand comparative spending & budgeting suggestions. "
    "Based on the insights below can you come up with a 'Budget Suggestion follow-up action' that helps the client to decide on how should he budget based on his peer group & his own transactions"
)
print(response)
```
Ensure that:

- The `FunctionCallingAgentWorker` is properly configured with the `allow_parallel_tool_calls` parameter.
- The tools and LLM are properly initialized and passed to the `FunctionCallingAgentWorker` and `StructuredPlannerAgent`.

If the issue persists, verify that the tool names used in your function calls match the names in the `tools_by_name` dictionary [1]; a quick check is sketched below.
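A minimal way to run that check yourself (a sketch, assuming your two tool objects are in scope):

```python
# Print the exact names used as keys in tools_by_name.
for tool in [get_client_comparison_tool, get_compared_to_peer_group_tool]:
    print(tool.metadata.name)

# Any tool name the LLM emits that is not printed here will raise a KeyError.
```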
@dosu I am still getting this error at times. To restate the problem: I am trying to use the `StructuredPlannerAgent` with `FunctionCallingAgentWorker`, and I intermittently get an error that says `KeyError: 'multi_tool_use.parallel'`.
This is my code:
```python
from llama_index.core.tools import FunctionTool
from llama_index.core.base.llms.types import ChatMessage
from llama_index.core.agent import (
    StructuredPlannerAgent,
    FunctionCallingAgentWorker,
)

financial_summary_tool = FunctionTool.from_defaults(fn=get_financial_summary)
daily_balances_tool = FunctionTool.from_defaults(fn=get_daily_balances)
spending_conditions_tool = FunctionTool.from_defaults(fn=get_spending_conditions)
client_spending_tool = FunctionTool.from_defaults(fn=get_client_spending_for_current_month)

# Initialize the FunctionCallingAgentWorker with allow_parallel_tool_calls set to True
worker = FunctionCallingAgentWorker(
    tools=[financial_summary_tool, daily_balances_tool, spending_conditions_tool, client_spending_tool],
    llm=llm,
    prefix_messages=[ChatMessage(content="You are now AI assistant for a Banking app for a given client to understand financial overview.", role="system")],
    verbose=True,
    allow_parallel_tool_calls=True,
)

# Wrap the worker in the top-level planner
agent = StructuredPlannerAgent(
    agent_worker=worker,
    tools=[financial_summary_tool, daily_balances_tool, spending_conditions_tool, client_spending_tool],
    verbose=True,
)

initiator_prompt = """
You are an AI assistant for a banking application. Your purpose is to help clients understand their financial overview and provide personalized advice for financial well-being.
Based on your analysis, create 'What If' scenarios that could improve the client's financial situation. Consider both short-term actions and their long-term impacts.
Develop personalized ideas for financial well-being. These should be tailored to the client's specific financial situation and goals.
For each suggestion, demonstrate how it would affect the client's finances in the upcoming month. Use the tools to calculate and show the potential impact.
Additionally, showcase the longer-term impact of these actions over a 6-month and 1-year period.
"""

whatif_insight = agent.chat(initiator_prompt)
```
The error:
```
File ~/Documents/amplify/hermes/utilities/research_rag/venv/lib/python3.9/site-packages/llama_index/core/tools/calling.py:74, in call_tool_with_selection(tool_call, tools, verbose)
     72 print("=== Calling Function ===")
     73 print(f"Calling function: {name} with args: {arguments_str}")
---> 74 tool = tools_by_name[name]
     75 output = call_tool(tool, tool_call.tool_kwargs)
     77 if verbose:

KeyError: 'multi_tool_use.parallel'
```
The `KeyError: 'multi_tool_use.parallel'` occurs because the tool name `'multi_tool_use.parallel'` is not found in the `tools_by_name` dictionary. This issue can arise if the tool name generated during the function call does not match any of the tool names in the `tools` list.

To resolve this issue, you can add a check to ensure that `tool_call.tool_name` exists in the `tools_by_name` dictionary before attempting to access it. Here is an updated version of the `call_tool_with_selection` function with this check:
```python
import json
from typing import List

# Import paths may vary slightly across llama-index versions.
from llama_index.core.tools.calling import call_tool
from llama_index.core.tools.types import ToolOutput


def call_tool_with_selection(
    tool_call: "ToolSelection",
    tools: List["BaseTool"],
    verbose: bool = False,
) -> ToolOutput:
    tools_by_name = {tool.metadata.name: tool for tool in tools}
    name = tool_call.tool_name
    if verbose:
        arguments_str = json.dumps(tool_call.tool_kwargs)
        print("=== Calling Function ===")
        print(f"Calling function: {name} with args: {arguments_str}")
    # Guard against hallucinated tool names instead of raising a KeyError.
    if name not in tools_by_name:
        return ToolOutput(
            content=f"Error: Tool '{name}' not found.",
            tool_name=name,
            raw_input=tool_call.tool_kwargs,
            raw_output="",
            is_error=True,
        )
    tool = tools_by_name[name]
    output = call_tool(tool, tool_call.tool_kwargs)
    if verbose:
        print("=== Function Output ===")
        print(output.content)
    return output
```
This check ensures that if the tool name is not found in the dictionary, an appropriate error message is returned instead of raising a `KeyError` [1].
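If you want to try this patched function without editing the installed package, one option is to monkey-patch it at startup. This is a sketch of an unsupported workaround, not an official API: the private call path can change between llama-index versions, and any module that already imported the original name needs patching too:

```python
import llama_index.core.tools.calling as tool_calling
import llama_index.core.llms.function_calling as llm_function_calling

# Assumes the patched call_tool_with_selection above is defined in your own module.
tool_calling.call_tool_with_selection = call_tool_with_selection

# function_calling.py holds its own reference (it is the caller in the
# traceback), so rebind the name there as well.
llm_function_calling.call_tool_with_selection = call_tool_with_selection
```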
Additionally, ensure that the `allow_parallel_tool_calls` parameter is correctly set and that the tools are properly initialized and passed to the `FunctionCallingAgentWorker` and `StructuredPlannerAgent`. Here is your updated code with these considerations:
```python
from llama_index.core.tools import FunctionTool
from llama_index.core.base.llms.types import ChatMessage
from llama_index.core.agent import StructuredPlannerAgent, FunctionCallingAgentWorker
from llama_index.llms.openai import OpenAI
import nest_asyncio
import os

# Initialize the LLM
llm = OpenAI(
    model="your_model_name",  # Replace with your model name
    temperature=0,
    max_tokens=4000,
    api_base=os.environ.get('GW_BASE_URL'),
    api_key="your_api_key",  # Replace with your API key
)

# Define your functions
def get_financial_summary():
    # Your function implementation
    pass

def get_daily_balances():
    # Your function implementation
    pass

def get_spending_conditions():
    # Your function implementation
    pass

def get_client_spending_for_current_month():
    # Your function implementation
    pass

# Initialize the tools
financial_summary_tool = FunctionTool.from_defaults(fn=get_financial_summary)
daily_balances_tool = FunctionTool.from_defaults(fn=get_daily_balances)
spending_conditions_tool = FunctionTool.from_defaults(fn=get_spending_conditions)
client_spending_tool = FunctionTool.from_defaults(fn=get_client_spending_for_current_month)

# Initialize the FunctionCallingAgentWorker with allow_parallel_tool_calls set to True
worker = FunctionCallingAgentWorker(
    tools=[financial_summary_tool, daily_balances_tool, spending_conditions_tool, client_spending_tool],
    llm=llm,
    prefix_messages=[ChatMessage(content="You are now AI assistant for a Banking app for a given client to understand financial overview.", role="system")],
    verbose=True,
    allow_parallel_tool_calls=True,
)

# Wrap the worker in the top-level planner
agent = StructuredPlannerAgent(
    agent_worker=worker,
    tools=[financial_summary_tool, daily_balances_tool, spending_conditions_tool, client_spending_tool],
    verbose=True,
)

# Apply nest_asyncio
nest_asyncio.apply()

# Run the agent
initiator_prompt = """
You are an AI assistant for a banking application. Your purpose is to help clients understand their financial overview and provide personalized advice for financial well-being.
Based on your analysis, create 'What If' scenarios that could improve the client's financial situation. Consider both short-term actions and their long-term impacts.
Develop personalized ideas for financial well-being. These should be tailored to the client's specific financial situation and goals.
For each suggestion, demonstrate how it would affect the client's finances in the upcoming month. Use the tools to calculate and show the potential impact.
Additionally, showcase the longer-term impact of these actions over a 6-month and 1-year period.
"""

whatif_insight = agent.chat(initiator_prompt)
print(whatif_insight)
```
This should help mitigate the `KeyError` by ensuring that the tool names are correctly checked and handled [2].
@dosu That gives me a new error:
```
File ~/Documents/amplify/hermes/utilities/research_rag/venv/lib/python3.9/site-packages/llama_index/core/agent/runner/planner.py:258, in StructuredPlannerAgent.create_plan(self, input, **kwargs)
    256 if self.verbose:
    257     print(f"=== Initial plan ===")
--> 258 for sub_task in plan.sub_tasks:
    259     print(
    260         f"{sub_task.name}:\n{sub_task.input} -> {sub_task.expected_output}\ndeps: {sub_task.dependencies}\n\n"
    261     )
    263 plan_id = str(uuid.uuid4())

AttributeError: 'str' object has no attribute 'sub_tasks'
```
To resolve the `AttributeError: 'str' object has no attribute 'sub_tasks'` when using the `StructuredPlannerAgent`, ensure that the `initial_plan_prompt` and `plan_refine_prompt` are correctly initialized as `PromptTemplate` objects if they are provided as strings. The error likely occurs because a string is being used where a `Plan` object is expected.
Here is the relevant part of the `StructuredPlannerAgent` initialization that handles this:
```python
def __init__(
    self,
    agent_worker: BaseAgentWorker,
    tools: Optional[List[BaseTool]] = None,
    tool_retriever: Optional[ObjectRetriever[BaseTool]] = None,
    chat_history: Optional[List[ChatMessage]] = None,
    state: Optional[PlannerAgentState] = None,
    memory: Optional[BaseMemory] = None,
    llm: Optional[LLM] = None,
    initial_plan_prompt: Union[str, PromptTemplate] = DEFAULT_INITIAL_PLAN_PROMPT,
    plan_refine_prompt: Union[str, PromptTemplate] = DEFAULT_PLAN_REFINE_PROMPT,
    callback_manager: Optional[CallbackManager] = None,
    init_task_state_kwargs: Optional[dict] = None,
    delete_task_on_finish: bool = False,
    default_tool_choice: str = "auto",
    verbose: bool = False,
) -> None:
    """Initialize."""
    self.agent_worker = agent_worker
    self.state = state or PlannerAgentState()
    self.memory = memory or ChatMemoryBuffer.from_defaults(chat_history, llm=llm)
    self.tools = tools
    self.tool_retriever = tool_retriever
    self.llm = llm or Settings.llm

    if isinstance(initial_plan_prompt, str):
        initial_plan_prompt = PromptTemplate(initial_plan_prompt)
    self.initial_plan_prompt = initial_plan_prompt

    if isinstance(plan_refine_prompt, str):
        plan_refine_prompt = PromptTemplate(plan_refine_prompt)
    self.plan_refine_prompt = plan_refine_prompt

    # get and set callback manager
    if callback_manager is not None:
        self.agent_worker.set_callback_manager(callback_manager)
        self.callback_manager = callback_manager
    else:
        if hasattr(self.agent_worker, "callback_manager"):
            self.callback_manager = (
                self.agent_worker.callback_manager or CallbackManager()
            )
        else:
            self.callback_manager = Settings.callback_manager

    self.init_task_state_kwargs = init_task_state_kwargs or {}
    self.delete_task_on_finish = delete_task_on_finish
    self.default_tool_choice = default_tool_choice
    self.verbose = verbose
```
Ensure that `initial_plan_prompt` and `plan_refine_prompt` are correctly converted to `PromptTemplate` objects if they are provided as strings. This should resolve the `AttributeError`.
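For example, because `__init__` wraps plain strings in a `PromptTemplate`, you can pass a custom plan prompt either way. The prompt text and template variables below are illustrative assumptions only; match the variables of the default plan prompt in your installed version:

```python
from llama_index.core import PromptTemplate

# Hypothetical plan prompt; {tools_str} and {task} are assumed variable names.
custom_plan_prompt = (
    "Think step by step. Given a task and a set of tools, "
    "plan the sub-tasks needed.\nTools: {tools_str}\nTask: {task}\n"
)

# Both forms are equivalent, since a str is converted internally.
agent = StructuredPlannerAgent(
    agent_worker=worker,
    tools=[financial_summary_tool, daily_balances_tool],
    initial_plan_prompt=custom_plan_prompt,                    # str form
    # initial_plan_prompt=PromptTemplate(custom_plan_prompt),  # explicit form
    verbose=True,
)
```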
Additionally, make sure that the `StructuredPlannerAgent` is correctly decomposing the initial input into sub-tasks. Each sub-task should be represented by an input, an expected outcome, and any dependent sub-tasks that should be completed first. Here is an example of how to create and use the `StructuredPlannerAgent`:
```python
from llama_index.core.agent import (
    StructuredPlannerAgent,
    FunctionCallingAgentWorker,
    ReActAgentWorker,
)

# create the function calling worker for reasoning
worker = FunctionCallingAgentWorker.from_tools(
    [financial_summary_tool, daily_balances_tool, spending_conditions_tool, client_spending_tool],
    verbose=True,
)

# wrap the worker in the top-level planner
agent = StructuredPlannerAgent(
    worker,
    tools=[financial_summary_tool, daily_balances_tool, spending_conditions_tool, client_spending_tool],
    verbose=True,
)

# Give the agent a complex task
response = agent.chat(
    "You are an AI assistant for a banking application. Your purpose is to help clients understand their financial overview and provide personalized advice for financial well-being. Based on your analysis, create 'What If' scenarios that could improve the client's financial situation. Consider both short-term actions and their long-term impacts. Develop personalized ideas for financial well-being. These should be tailored to the client's specific financial situation and goals. For each suggestion, demonstrate how it would affect the client's finances in the upcoming month. Use the tools to calculate and show the potential impact. Additionally, showcase the longer-term impact of these actions over a 6-month and 1-year period."
)
print(str(response))
```
By ensuring the prompts are correctly initialized and the agent is properly decomposing tasks, you should be able to avoid the `AttributeError` [1][2][3].
@raoofnaushad the LLM in this case is hallucinating a tool name. I'm pretty sure recent versions of llama-index catch this error and re-prompt the LLM so that it can retry.
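If you are on an older release, upgrading the core package is worth trying first (the exact version that added the retry behavior isn't stated in this thread):

```bash
pip install -U llama-index-core
```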
Question Validation
Question
I was trying to use `StructuredPlannerAgent` with `FunctionCallingAgentWorker`. The code is as below. However, each time I run it I get an error that says: `KeyError: 'multi_tool_use.parallel'`
Code:
Error: