langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com
MIT License
92.5k stars 14.8k forks

Trouble using the Conversational Agent: Missing some input keys: {'input'} and Agent.plan() got multiple values for argument 'intermediate_steps' #12521

Closed · dianz17 closed this issue 6 months ago

dianz17 commented 10 months ago

Issue you'd like to raise.

I'm trying to use an agent as a router for two retrievers. I also want it to have memory, so I'm using the conversational agent with two RetrievalQA chains and a ConversationBufferWindowMemory.

Here's the code after creating the retrievers and chat templates:

chat = ChatVertexAI(
    model_name = "chat-bison-32k",
    temperature = 0.5,
    top_p = 0.8,
    top_k = 40,
    max_output_tokens = 300
)

prompt1 = PromptTemplate(
    input_variables=["context", "input"],
    template=chatTemplate1
)
chain_type_kwargs1 = {"prompt": prompt1}

prompt2 = PromptTemplate(
    input_variables=["context", "input"],
    template=chatTemplate2
)
chain_type_kwargs2 = {"prompt": prompt2}

faq = RetrievalQA.from_chain_type(
    llm = llm,
    retriever = chroma_retriever,
    chain_type="stuff",
    chain_type_kwargs=chain_type_kwargs1
)

complex_products = RetrievalQA.from_chain_type(
    llm = llm,
    retriever = ensemble_retriever,
    chain_type="stuff",
    chain_type_kwargs=chain_type_kwargs2
)

tools = [
    Tool(
        name="FAQ about the store",
        func= faq.run,
        description="useful when you need to answer frequently asked questions. This tool provides information on business hours and store location. To use, input a well-formulated question.",
    ),

    Tool(
        name="Product information",
        func=complex_products.run,
        description="useful when you need to answer questions about certain products that the customer needs. Input should be a fully formed question.",
    ),
]

memory = ConversationBufferWindowMemory(memory_key="chat_history",input_key="input",output_key='output', return_messages=True, k=4)

I've been trying two things. The first is to call the agent, with the memory attached, directly; I tried it both with and without .run():

agent = initialize_agent(
    tools = tools,
    llm = chat,
    agent = AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    verbose=True,
    handle_parsing_errors=True,
    max_iterations = 3, 
    early_stopping_method = 'generate',
    memory = memory
)

result = agent("What are the closing hours?")
result["output"]

but for some questions (not all) I get the following error:

> Entering new AgentExecutor chain...
 Thought: Do I need to use a tool? Yes
Action: FAQ about the store
Action Input: What are the closing hours?
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[45], line 1
----> 1 result = agent("What are the closing hours?")
      2 result["output"]

File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:310, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    308 except BaseException as e:
    309     run_manager.on_chain_error(e)
--> 310     raise e
    311 run_manager.on_chain_end(outputs)
    312 final_outputs: Dict[str, Any] = self.prep_outputs(
    313     inputs, outputs, return_only_outputs
    314 )

File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:304, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    297 run_manager = callback_manager.on_chain_start(
    298     dumpd(self),
    299     inputs,
    300     name=run_name,
    301 )
    302 try:
    303     outputs = (
--> 304         self._call(inputs, run_manager=run_manager)
    305         if new_arg_supported
    306         else self._call(inputs)
    307     )
    308 except BaseException as e:
    309     run_manager.on_chain_error(e)

File /opt/conda/lib/python3.10/site-packages/langchain/agents/agent.py:1146, in AgentExecutor._call(self, inputs, run_manager)
   1144 # We now enter the agent loop (until it returns something).
   1145 while self._should_continue(iterations, time_elapsed):
-> 1146     next_step_output = self._take_next_step(
   1147         name_to_tool_map,
   1148         color_mapping,
   1149         inputs,
   1150         intermediate_steps,
   1151         run_manager=run_manager,
   1152     )
   1153     if isinstance(next_step_output, AgentFinish):
   1154         return self._return(
   1155             next_step_output, intermediate_steps, run_manager=run_manager
   1156         )

File /opt/conda/lib/python3.10/site-packages/langchain/agents/agent.py:996, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
    994         tool_run_kwargs["llm_prefix"] = ""
    995     # We then call the tool on the tool input to get an observation
--> 996     observation = tool.run(
    997         agent_action.tool_input,
    998         verbose=self.verbose,
    999         color=color,
   1000         callbacks=run_manager.get_child() if run_manager else None,
   1001         **tool_run_kwargs,
   1002     )
   1003 else:
   1004     tool_run_kwargs = self.agent.tool_run_logging_kwargs()

File /opt/conda/lib/python3.10/site-packages/langchain/tools/base.py:365, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, **kwargs)
    363 except (Exception, KeyboardInterrupt) as e:
    364     run_manager.on_tool_error(e)
--> 365     raise e
    366 else:
    367     run_manager.on_tool_end(
    368         str(observation), color=color, name=self.name, **kwargs
    369     )

File /opt/conda/lib/python3.10/site-packages/langchain/tools/base.py:337, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, **kwargs)
    334 try:
    335     tool_args, tool_kwargs = self._to_args_and_kwargs(parsed_input)
    336     observation = (
--> 337         self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
    338         if new_arg_supported
    339         else self._run(*tool_args, **tool_kwargs)
    340     )
    341 except ToolException as e:
    342     if not self.handle_tool_error:

File /opt/conda/lib/python3.10/site-packages/langchain/tools/base.py:510, in Tool._run(self, run_manager, *args, **kwargs)
    507 if self.func:
    508     new_argument_supported = signature(self.func).parameters.get("callbacks")
    509     return (
--> 510         self.func(
    511             *args,
    512             callbacks=run_manager.get_child() if run_manager else None,
    513             **kwargs,
    514         )
    515         if new_argument_supported
    516         else self.func(*args, **kwargs)
    517     )
    518 raise NotImplementedError("Tool does not support sync")

File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:505, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
    503     if len(args) != 1:
    504         raise ValueError("`run` supports only one positional argument.")
--> 505     return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
    506         _output_key
    507     ]
    509 if kwargs and not args:
    510     return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
    511         _output_key
    512     ]

File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:310, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    308 except BaseException as e:
    309     run_manager.on_chain_error(e)
--> 310     raise e
    311 run_manager.on_chain_end(outputs)
    312 final_outputs: Dict[str, Any] = self.prep_outputs(
    313     inputs, outputs, return_only_outputs
    314 )

File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:304, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    297 run_manager = callback_manager.on_chain_start(
    298     dumpd(self),
    299     inputs,
    300     name=run_name,
    301 )
    302 try:
    303     outputs = (
--> 304         self._call(inputs, run_manager=run_manager)
    305         if new_arg_supported
    306         else self._call(inputs)
    307     )
    308 except BaseException as e:
    309     run_manager.on_chain_error(e)

File /opt/conda/lib/python3.10/site-packages/langchain/chains/retrieval_qa/base.py:139, in BaseRetrievalQA._call(self, inputs, run_manager)
    137 else:
    138     docs = self._get_docs(question)  # type: ignore[call-arg]
--> 139 answer = self.combine_documents_chain.run(
    140     input_documents=docs, question=question, callbacks=_run_manager.get_child()
    141 )
    143 if self.return_source_documents:
    144     return {self.output_key: answer, "source_documents": docs}

File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:510, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
    505     return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
    506         _output_key
    507     ]
    509 if kwargs and not args:
--> 510     return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
    511         _output_key
    512     ]
    514 if not kwargs and not args:
    515     raise ValueError(
    516         "`run` supported with either positional arguments or keyword arguments,"
    517         " but none were provided."
    518     )

File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:286, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    250 def __call__(
    251     self,
    252     inputs: Union[Dict[str, Any], Any],
   (...)
    259     include_run_info: bool = False,
    260 ) -> Dict[str, Any]:
    261     """Execute the chain.
    262 
    263     Args:
   (...)
    284             `Chain.output_keys`.
    285     """
--> 286     inputs = self.prep_inputs(inputs)
    287     callback_manager = CallbackManager.configure(
    288         callbacks,
    289         self.callbacks,
   (...)
    294         self.metadata,
    295     )
    296     new_arg_supported = inspect.signature(self._call).parameters.get("run_manager")

File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:443, in Chain.prep_inputs(self, inputs)
    441     external_context = self.memory.load_memory_variables(inputs)
    442     inputs = dict(inputs, **external_context)
--> 443 self._validate_inputs(inputs)
    444 return inputs

File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:195, in Chain._validate_inputs(self, inputs)
    193 missing_keys = set(self.input_keys).difference(inputs)
    194 if missing_keys:
--> 195     raise ValueError(f"Missing some input keys: {missing_keys}")

ValueError: Missing some input keys: {'input'}

I also tried calling the agent like this:

chat_history = []
question = "What are the closing hours?"
agent({"input": question, "chat_history": chat_history})

and got the same error.

I also tried using an agent executor, moving the memory from the agent to the executor, but I get the following error:

agent_executor = AgentExecutor(agent=agent,
                               tools=tools,
                               memory=memory,
                               verbose=True)

agent_executor.invoke({"input": "What are the closing hours?"})['output']

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[61], line 1
----> 1 agent_executor.invoke({"input": "What are the closing hours?"})['output']

File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:87, in Chain.invoke(self, input, config, **kwargs)
     80 def invoke(
     81     self,
     82     input: Dict[str, Any],
     83     config: Optional[RunnableConfig] = None,
     84     **kwargs: Any,
     85 ) -> Dict[str, Any]:
     86     config = config or {}
---> 87     return self(
     88         input,
     89         callbacks=config.get("callbacks"),
     90         tags=config.get("tags"),
     91         metadata=config.get("metadata"),
     92         run_name=config.get("run_name"),
     93         **kwargs,
     94     )

File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:310, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    308 except BaseException as e:
    309     run_manager.on_chain_error(e)
--> 310     raise e
    311 run_manager.on_chain_end(outputs)
    312 final_outputs: Dict[str, Any] = self.prep_outputs(
    313     inputs, outputs, return_only_outputs
    314 )

File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:304, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    297 run_manager = callback_manager.on_chain_start(
    298     dumpd(self),
    299     inputs,
    300     name=run_name,
    301 )
    302 try:
    303     outputs = (
--> 304         self._call(inputs, run_manager=run_manager)
    305         if new_arg_supported
    306         else self._call(inputs)
    307     )
    308 except BaseException as e:
    309     run_manager.on_chain_error(e)

File /opt/conda/lib/python3.10/site-packages/langchain/agents/agent.py:1146, in AgentExecutor._call(self, inputs, run_manager)
   1144 # We now enter the agent loop (until it returns something).
   1145 while self._should_continue(iterations, time_elapsed):
-> 1146     next_step_output = self._take_next_step(
   1147         name_to_tool_map,
   1148         color_mapping,
   1149         inputs,
   1150         intermediate_steps,
   1151         run_manager=run_manager,
   1152     )
   1153     if isinstance(next_step_output, AgentFinish):
   1154         return self._return(
   1155             next_step_output, intermediate_steps, run_manager=run_manager
   1156         )

File /opt/conda/lib/python3.10/site-packages/langchain/agents/agent.py:933, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
    930     intermediate_steps = self._prepare_intermediate_steps(intermediate_steps)
    932     # Call the LLM to see what to do.
--> 933     output = self.agent.plan(
    934         intermediate_steps,
    935         callbacks=run_manager.get_child() if run_manager else None,
    936         **inputs,
    937     )
    938 except OutputParserException as e:
    939     if isinstance(self.handle_parsing_errors, bool):

File /opt/conda/lib/python3.10/site-packages/langchain/agents/agent.py:375, in RunnableAgent.plan(self, intermediate_steps, callbacks, **kwargs)
    363 """Given input, decided what to do.
    364 
    365 Args:
   (...)
    372     Action specifying what tool to use.
    373 """
    374 inputs = {**kwargs, **{"intermediate_steps": intermediate_steps}}
--> 375 output = self.runnable.invoke(inputs, config={"callbacks": callbacks})
    376 return output

File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:87, in Chain.invoke(self, input, config, **kwargs)
     80 def invoke(
     81     self,
     82     input: Dict[str, Any],
     83     config: Optional[RunnableConfig] = None,
     84     **kwargs: Any,
     85 ) -> Dict[str, Any]:
     86     config = config or {}
---> 87     return self(
     88         input,
     89         callbacks=config.get("callbacks"),
     90         tags=config.get("tags"),
     91         metadata=config.get("metadata"),
     92         run_name=config.get("run_name"),
     93         **kwargs,
     94     )

File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:310, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    308 except BaseException as e:
    309     run_manager.on_chain_error(e)
--> 310     raise e
    311 run_manager.on_chain_end(outputs)
    312 final_outputs: Dict[str, Any] = self.prep_outputs(
    313     inputs, outputs, return_only_outputs
    314 )

File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:304, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    297 run_manager = callback_manager.on_chain_start(
    298     dumpd(self),
    299     inputs,
    300     name=run_name,
    301 )
    302 try:
    303     outputs = (
--> 304         self._call(inputs, run_manager=run_manager)
    305         if new_arg_supported
    306         else self._call(inputs)
    307     )
    308 except BaseException as e:
    309     run_manager.on_chain_error(e)

File /opt/conda/lib/python3.10/site-packages/langchain/agents/agent.py:1146, in AgentExecutor._call(self, inputs, run_manager)
   1144 # We now enter the agent loop (until it returns something).
   1145 while self._should_continue(iterations, time_elapsed):
-> 1146     next_step_output = self._take_next_step(
   1147         name_to_tool_map,
   1148         color_mapping,
   1149         inputs,
   1150         intermediate_steps,
   1151         run_manager=run_manager,
   1152     )
   1153     if isinstance(next_step_output, AgentFinish):
   1154         return self._return(
   1155             next_step_output, intermediate_steps, run_manager=run_manager
   1156         )

File /opt/conda/lib/python3.10/site-packages/langchain/agents/agent.py:933, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
    930     intermediate_steps = self._prepare_intermediate_steps(intermediate_steps)
    932     # Call the LLM to see what to do.
--> 933     output = self.agent.plan(
    934         intermediate_steps,
    935         callbacks=run_manager.get_child() if run_manager else None,
    936         **inputs,
    937     )
    938 except OutputParserException as e:
    939     if isinstance(self.handle_parsing_errors, bool):

TypeError: Agent.plan() got multiple values for argument 'intermediate_steps'

I'm not sure what I'm doing wrong. @dosu-bot


dosubot[bot] commented 10 months ago

🤖

Hello @dianz17,

I'm here to assist you with your issue. Let's try to address the problems you're encountering.

Firstly, the error "Missing some input keys: {'input'}" suggests that the necessary 'input' key is not being provided in the dictionary when calling the agent. When calling the agent or agent executor, you need to pass a dictionary with an 'input' key that contains the text you want the agent to process. Here's an example:

result = agent_executor({"input": "hi, im bob"})

Secondly, the error "Agent.plan() got multiple values for argument 'intermediate_steps'" indicates that the 'intermediate_steps' argument is being passed multiple times when calling the agent executor. The 'intermediate_steps' argument should only be passed once when creating the agent executor. Here's an example:

from langchain.agents import AgentExecutor

agent_executor = AgentExecutor(agent=agent, tools=tools, memory=memory, verbose=True,
                               return_intermediate_steps=True)

After creating the agent executor, you can call it without passing the 'intermediate_steps' argument:

result = agent_executor({"input": "hi, im bob"})

Regarding ConversationBufferWindowMemory: it keeps a list of the conversation's interactions over time but only uses the last k of them, which is useful for maintaining a sliding window of the most recent exchanges. Here's an example of how to use it:

from langchain.memory import ConversationBufferWindowMemory

memory = ConversationBufferWindowMemory(k=1)
memory.save_context({"input": "hi"}, {"output": "whats up"})
memory.save_context({"input": "not much you"}, {"output": "not much"})
memory.load_memory_variables({})

This will return the most recent interaction:

{'history': 'Human: not much you\nAI: not much'}

You can also get the history as a list of messages by setting return_messages=True when initializing ConversationBufferWindowMemory.
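The sliding-window behavior itself is easy to picture. Here is a minimal plain-Python sketch (not the LangChain implementation; the WindowMemory class is made up purely for illustration) of the idea behind ConversationBufferWindowMemory with k=1:

```python
from collections import deque

# Illustrative stand-in (NOT the LangChain API) for the sliding-window idea:
# only the last k exchanges survive, older ones are silently dropped.
class WindowMemory:
    def __init__(self, k):
        self.turns = deque(maxlen=k)  # each turn is an (input, output) pair

    def save_context(self, inputs, outputs):
        self.turns.append((inputs["input"], outputs["output"]))

    def load_memory_variables(self, _):
        history = "\n".join(f"Human: {i}\nAI: {o}" for i, o in self.turns)
        return {"history": history}

mem = WindowMemory(k=1)
mem.save_context({"input": "hi"}, {"output": "whats up"})
mem.save_context({"input": "not much you"}, {"output": "not much"})
print(mem.load_memory_variables({}))
# {'history': 'Human: not much you\nAI: not much'}
```

With k=1 the first exchange is evicted when the second is saved, which is why only the most recent interaction appears in the history string.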

If you're still encountering errors, please provide more details about your code and how you're using the agent and agent executor. I hope this helps! If you have any more questions, feel free to ask.

Best, Dosu

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

dianz17 commented 10 months ago

@dosu-bot I'm calling the agent executor with the 'input' key just like you said, but I still get the same error. I also tried your suggestion about 'intermediate_steps' and I'm still getting the same error:


agent_executor = AgentExecutor(agent=agent,
                               tools=tools,
                               memory=memory,
                               verbose=True,
                               return_intermediate_steps=True)

result = agent_executor({"input": "What are the closing hours?"})
result["output"]

> Entering new AgentExecutor chain...

> Entering new AgentExecutor chain...
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[65], line 1
----> 1 result = agent_executor({"input": "What are the closing hours?"})
      3 result["output"]

File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:310, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    308 except BaseException as e:
    309     run_manager.on_chain_error(e)
--> 310     raise e
    311 run_manager.on_chain_end(outputs)
    312 final_outputs: Dict[str, Any] = self.prep_outputs(
    313     inputs, outputs, return_only_outputs
    314 )

File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:304, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    297 run_manager = callback_manager.on_chain_start(
    298     dumpd(self),
    299     inputs,
    300     name=run_name,
    301 )
    302 try:
    303     outputs = (
--> 304         self._call(inputs, run_manager=run_manager)
    305         if new_arg_supported
    306         else self._call(inputs)
    307     )
    308 except BaseException as e:
    309     run_manager.on_chain_error(e)

File /opt/conda/lib/python3.10/site-packages/langchain/agents/agent.py:1146, in AgentExecutor._call(self, inputs, run_manager)
   1144 # We now enter the agent loop (until it returns something).
   1145 while self._should_continue(iterations, time_elapsed):
-> 1146     next_step_output = self._take_next_step(
   1147         name_to_tool_map,
   1148         color_mapping,
   1149         inputs,
   1150         intermediate_steps,
   1151         run_manager=run_manager,
   1152     )
   1153     if isinstance(next_step_output, AgentFinish):
   1154         return self._return(
   1155             next_step_output, intermediate_steps, run_manager=run_manager
   1156         )

File /opt/conda/lib/python3.10/site-packages/langchain/agents/agent.py:933, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
    930     intermediate_steps = self._prepare_intermediate_steps(intermediate_steps)
    932     # Call the LLM to see what to do.
--> 933     output = self.agent.plan(
    934         intermediate_steps,
    935         callbacks=run_manager.get_child() if run_manager else None,
    936         **inputs,
    937     )
    938 except OutputParserException as e:
    939     if isinstance(self.handle_parsing_errors, bool):

File /opt/conda/lib/python3.10/site-packages/langchain/agents/agent.py:375, in RunnableAgent.plan(self, intermediate_steps, callbacks, **kwargs)
    363 """Given input, decided what to do.
    364 
    365 Args:
   (...)
    372     Action specifying what tool to use.
    373 """
    374 inputs = {**kwargs, **{"intermediate_steps": intermediate_steps}}
--> 375 output = self.runnable.invoke(inputs, config={"callbacks": callbacks})
    376 return output

File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:87, in Chain.invoke(self, input, config, **kwargs)
     80 def invoke(
     81     self,
     82     input: Dict[str, Any],
     83     config: Optional[RunnableConfig] = None,
     84     **kwargs: Any,
     85 ) -> Dict[str, Any]:
     86     config = config or {}
---> 87     return self(
     88         input,
     89         callbacks=config.get("callbacks"),
     90         tags=config.get("tags"),
     91         metadata=config.get("metadata"),
     92         run_name=config.get("run_name"),
     93         **kwargs,
     94     )

File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:310, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    308 except BaseException as e:
    309     run_manager.on_chain_error(e)
--> 310     raise e
    311 run_manager.on_chain_end(outputs)
    312 final_outputs: Dict[str, Any] = self.prep_outputs(
    313     inputs, outputs, return_only_outputs
    314 )

File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:304, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    297 run_manager = callback_manager.on_chain_start(
    298     dumpd(self),
    299     inputs,
    300     name=run_name,
    301 )
    302 try:
    303     outputs = (
--> 304         self._call(inputs, run_manager=run_manager)
    305         if new_arg_supported
    306         else self._call(inputs)
    307     )
    308 except BaseException as e:
    309     run_manager.on_chain_error(e)

File /opt/conda/lib/python3.10/site-packages/langchain/agents/agent.py:1146, in AgentExecutor._call(self, inputs, run_manager)
   1144 # We now enter the agent loop (until it returns something).
   1145 while self._should_continue(iterations, time_elapsed):
-> 1146     next_step_output = self._take_next_step(
   1147         name_to_tool_map,
   1148         color_mapping,
   1149         inputs,
   1150         intermediate_steps,
   1151         run_manager=run_manager,
   1152     )
   1153     if isinstance(next_step_output, AgentFinish):
   1154         return self._return(
   1155             next_step_output, intermediate_steps, run_manager=run_manager
   1156         )

File /opt/conda/lib/python3.10/site-packages/langchain/agents/agent.py:933, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
    930     intermediate_steps = self._prepare_intermediate_steps(intermediate_steps)
    932     # Call the LLM to see what to do.
--> 933     output = self.agent.plan(
    934         intermediate_steps,
    935         callbacks=run_manager.get_child() if run_manager else None,
    936         **inputs,
    937     )
    938 except OutputParserException as e:
    939     if isinstance(self.handle_parsing_errors, bool):

TypeError: Agent.plan() got multiple values for argument 'intermediate_steps'
dosubot[bot] commented 10 months ago

🤖

Based on the traceback you provided, it seems like the error is originating from the Agent.plan() method. This method is being called with the intermediate_steps argument being passed twice, once explicitly and once within the **inputs dictionary. This is likely causing the TypeError you're encountering.

The intermediate_steps argument is used to keep track of the steps that the agent has taken up to the current point, along with the observations made at each step. It's a list of tuples, where each tuple consists of an AgentAction and a string. The AgentAction represents the action taken by the agent at a particular step, and the string represents the observation made after that action.

To resolve this issue, you should ensure that 'intermediate_steps' is only passed once. If 'intermediate_steps' is already included in the kwargs dictionary, you do not need to pass it again as a separate argument. Here's how you can modify the 'plan' and 'aplan' methods to avoid the error:

def plan(
    self,
    intermediate_steps: List[Tuple[AgentAction, str]],
    callbacks: Callbacks = None,
    **kwargs: Any,
) -> Union[AgentAction, AgentFinish]:
    inputs = {**kwargs, "intermediate_steps": intermediate_steps}
    output = self.runnable.invoke(inputs, config={"callbacks": callbacks})
    return output

async def aplan(
    self,
    intermediate_steps: List[Tuple[AgentAction, str]],
    callbacks: Callbacks = None,
    **kwargs: Any,
) -> Union[AgentAction, AgentFinish]:
    inputs = {**kwargs, "intermediate_steps": intermediate_steps}
    output = await self.runnable.ainvoke(inputs, config={"callbacks": callbacks})
    return output

In this modification, 'intermediate_steps' is still included in the inputs dictionary, but it is not passed again as a separate argument when invoking the runnable. This should prevent the TypeError from occurring.
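The mechanism behind the TypeError can be reproduced in isolation. The following is a plain-Python sketch (no LangChain involved; the plan function here is a made-up stand-in, not the library's method) of how an argument that is both bound positionally and present in an unpacked dict gets bound twice:

```python
# Illustrative stand-in for Agent.plan(): the first parameter is positional.
def plan(intermediate_steps, **kwargs):
    return {"intermediate_steps": intermediate_steps, **kwargs}

# The executor's inputs dict already carries an 'intermediate_steps' key...
inputs = {"input": "What are the closing hours?", "intermediate_steps": []}

try:
    # ...so passing it positionally AND via **inputs binds it twice.
    plan([], **inputs)
except TypeError as e:
    message = str(e)
    print(message)  # plan() got multiple values for argument 'intermediate_steps'
```

This is exactly the shape of the call at the bottom of the traceback, where `self.agent.plan(intermediate_steps, ..., **inputs)` receives an inputs dict that already contains the key.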

Please note that this modification should be done in the LangChain source code, specifically in the AgentExecutor class in the agent.py file. If you're not comfortable modifying the source code, you might want to wait for an update from the LangChain team that addresses this issue.

I hope this helps! Let me know if you have any other questions.


dianz17 commented 10 months ago

@dosu-bot since I'm unable to do that, let's focus on using the agent without an executor, like so:

agent = initialize_agent(
    tools = tools,
    llm = chat,
    agent = AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    verbose=True,
    handle_parsing_errors=True,
    max_iterations = 3, 
    early_stopping_method = 'generate',
    memory = memory
)

chat_history = []
question = "What are the closing hours?"
agent({"input": question, "chat_history": chat_history})

Remember that since I'm using memory, I need to pass the chat_history too.

The agent is able to identify which tool to use, but it still throws an error even though I'm passing the input key:


> Entering new AgentExecutor chain...
 Thought: Do I need to use a tool? Yes
Action: FAQ about the store
Action Input: What are the closing hours?
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[71], line 3
      1 chat_history = []
      2 question = "What are the closing hours?"
----> 3 agent.run({"input": question, "chat_history": chat_history})
      4 # agent.run("Hola")

File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:505, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
    503     if len(args) != 1:
    504         raise ValueError("`run` supports only one positional argument.")
--> 505     return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
    506         _output_key
    507     ]
    509 if kwargs and not args:
    510     return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
    511         _output_key
    512     ]

File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:310, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    308 except BaseException as e:
    309     run_manager.on_chain_error(e)
--> 310     raise e
    311 run_manager.on_chain_end(outputs)
    312 final_outputs: Dict[str, Any] = self.prep_outputs(
    313     inputs, outputs, return_only_outputs
    314 )

File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:304, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    297 run_manager = callback_manager.on_chain_start(
    298     dumpd(self),
    299     inputs,
    300     name=run_name,
    301 )
    302 try:
    303     outputs = (
--> 304         self._call(inputs, run_manager=run_manager)
    305         if new_arg_supported
    306         else self._call(inputs)
    307     )
    308 except BaseException as e:
    309     run_manager.on_chain_error(e)

File /opt/conda/lib/python3.10/site-packages/langchain/agents/agent.py:1146, in AgentExecutor._call(self, inputs, run_manager)
   1144 # We now enter the agent loop (until it returns something).
   1145 while self._should_continue(iterations, time_elapsed):
-> 1146     next_step_output = self._take_next_step(
   1147         name_to_tool_map,
   1148         color_mapping,
   1149         inputs,
   1150         intermediate_steps,
   1151         run_manager=run_manager,
   1152     )
   1153     if isinstance(next_step_output, AgentFinish):
   1154         return self._return(
   1155             next_step_output, intermediate_steps, run_manager=run_manager
   1156         )

File /opt/conda/lib/python3.10/site-packages/langchain/agents/agent.py:996, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
    994         tool_run_kwargs["llm_prefix"] = ""
    995     # We then call the tool on the tool input to get an observation
--> 996     observation = tool.run(
    997         agent_action.tool_input,
    998         verbose=self.verbose,
    999         color=color,
   1000         callbacks=run_manager.get_child() if run_manager else None,
   1001         **tool_run_kwargs,
   1002     )
   1003 else:
   1004     tool_run_kwargs = self.agent.tool_run_logging_kwargs()

File /opt/conda/lib/python3.10/site-packages/langchain/tools/base.py:365, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, **kwargs)
    363 except (Exception, KeyboardInterrupt) as e:
    364     run_manager.on_tool_error(e)
--> 365     raise e
    366 else:
    367     run_manager.on_tool_end(
    368         str(observation), color=color, name=self.name, **kwargs
    369     )

File /opt/conda/lib/python3.10/site-packages/langchain/tools/base.py:337, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, **kwargs)
    334 try:
    335     tool_args, tool_kwargs = self._to_args_and_kwargs(parsed_input)
    336     observation = (
--> 337         self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
    338         if new_arg_supported
    339         else self._run(*tool_args, **tool_kwargs)
    340     )
    341 except ToolException as e:
    342     if not self.handle_tool_error:

File /opt/conda/lib/python3.10/site-packages/langchain/tools/base.py:510, in Tool._run(self, run_manager, *args, **kwargs)
    507 if self.func:
    508     new_argument_supported = signature(self.func).parameters.get("callbacks")
    509     return (
--> 510         self.func(
    511             *args,
    512             callbacks=run_manager.get_child() if run_manager else None,
    513             **kwargs,
    514         )
    515         if new_argument_supported
    516         else self.func(*args, **kwargs)
    517     )
    518 raise NotImplementedError("Tool does not support sync")

File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:505, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
    503     if len(args) != 1:
    504         raise ValueError("`run` supports only one positional argument.")
--> 505     return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
    506         _output_key
    507     ]
    509 if kwargs and not args:
    510     return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
    511         _output_key
    512     ]

File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:310, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    308 except BaseException as e:
    309     run_manager.on_chain_error(e)
--> 310     raise e
    311 run_manager.on_chain_end(outputs)
    312 final_outputs: Dict[str, Any] = self.prep_outputs(
    313     inputs, outputs, return_only_outputs
    314 )

File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:304, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    297 run_manager = callback_manager.on_chain_start(
    298     dumpd(self),
    299     inputs,
    300     name=run_name,
    301 )
    302 try:
    303     outputs = (
--> 304         self._call(inputs, run_manager=run_manager)
    305         if new_arg_supported
    306         else self._call(inputs)
    307     )
    308 except BaseException as e:
    309     run_manager.on_chain_error(e)

File /opt/conda/lib/python3.10/site-packages/langchain/chains/retrieval_qa/base.py:139, in BaseRetrievalQA._call(self, inputs, run_manager)
    137 else:
    138     docs = self._get_docs(question)  # type: ignore[call-arg]
--> 139 answer = self.combine_documents_chain.run(
    140     input_documents=docs, question=question, callbacks=_run_manager.get_child()
    141 )
    143 if self.return_source_documents:
    144     return {self.output_key: answer, "source_documents": docs}

File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:510, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
    505     return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
    506         _output_key
    507     ]
    509 if kwargs and not args:
--> 510     return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
    511         _output_key
    512     ]
    514 if not kwargs and not args:
    515     raise ValueError(
    516         "`run` supported with either positional arguments or keyword arguments,"
    517         " but none were provided."
    518     )

File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:286, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    250 def __call__(
    251     self,
    252     inputs: Union[Dict[str, Any], Any],
   (...)
    259     include_run_info: bool = False,
    260 ) -> Dict[str, Any]:
    261     """Execute the chain.
    262 
    263     Args:
   (...)
    284             `Chain.output_keys`.
    285     """
--> 286     inputs = self.prep_inputs(inputs)
    287     callback_manager = CallbackManager.configure(
    288         callbacks,
    289         self.callbacks,
   (...)
    294         self.metadata,
    295     )
    296     new_arg_supported = inspect.signature(self._call).parameters.get("run_manager")

File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:443, in Chain.prep_inputs(self, inputs)
    441     external_context = self.memory.load_memory_variables(inputs)
    442     inputs = dict(inputs, **external_context)
--> 443 self._validate_inputs(inputs)
    444 return inputs

File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:195, in Chain._validate_inputs(self, inputs)
    193 missing_keys = set(self.input_keys).difference(inputs)
    194 if missing_keys:
--> 195     raise ValueError(f"Missing some input keys: {missing_keys}")

ValueError: Missing some input keys: {'input'}
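The final ValueError comes from `Chain._validate_inputs` in the traceback above. Note that `BaseRetrievalQA._call` hands the stuff chain the key `question` (`combine_documents_chain.run(input_documents=docs, question=question, ...)`), while the custom prompts were built with `input_variables=["context", "input"]`. A toy reproduction of that validation step (the helper name is hypothetical, a stand-in for the LangChain internals shown in the traceback):

```python
def validate_inputs(expected_keys, inputs):
    """Toy stand-in for Chain._validate_inputs from the traceback above."""
    missing = set(expected_keys) - set(inputs)
    if missing:
        raise ValueError(f"Missing some input keys: {missing}")

# The custom prompt was built with input_variables=["context", "input"] ...
prompt_keys = ["context", "input"]
# ... but RetrievalQA's stuff chain supplies "context" and "question".
supplied = {"context": "retrieved docs...", "question": "What are the closing hours?"}

try:
    validate_inputs(prompt_keys, supplied)
except ValueError as e:
    print(e)  # Missing some input keys: {'input'}
```

If this reading of the traceback is right, renaming the prompt variable from `input` to `question` (i.e. `input_variables=["context", "question"]` and `{question}` in the template) would make the tool's expected keys match what RetrievalQA supplies; this is an inference from the traceback, not a confirmed fix.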
dosubot[bot] commented 10 months ago

πŸ€–

Based on the information you've provided and the similar issues I found in the LangChain repository, it seems like the error you're encountering is due to the missing 'intermediate_steps' key when calling the agent.

In the LangChain framework, 'intermediate_steps' is used in several ways. It's included in the final output of the agent if the 'return_intermediate_steps' attribute is set to True. It's also passed to the 'plan' method of the 'agent' object after being prepared by the '_prepare_intermediate_steps' method, allowing the agent to decide the next action to take based on the previous steps.

To fix this issue, you should include 'intermediate_steps' in your input when calling the agent. If you don't have any intermediate steps to provide, you can pass an empty list or other suitable default value. Here's an example of how to call the agent:

input_data = {
    'input': 'your input text',
    'chat_history': [('human', 'ai')],  # replace with your actual chat history
    'intermediate_steps': []  # replace with your actual intermediate steps if any
}
output = agent(input_data)

Please replace 'your input text', ('human', 'ai'), and [] with your actual data.

Also, ensure that the 'chat_history' key in your input data aligns with the memory key used in your LangChain application. If you're using the ConversationBufferMemory class, you can control the memory key by passing it as an argument when initializing the class. For example, if you want the memory variables to be returned in the key chat_history, you can do:

memory = ConversationBufferMemory(memory_key="chat_history")

This will ensure that the 'chat_history' key in your input data aligns with the memory key used in your LangChain application, preventing potential errors.
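The merge described here is visible in `Chain.prep_inputs` in the earlier traceback: memory variables are loaded and folded into the call's inputs before validation runs. A toy sketch of that flow (class and function names are stand-ins, not LangChain's actual classes):

```python
class Memory:
    """Toy stand-in for a LangChain memory object."""
    def __init__(self, memory_key):
        self.memory_key = memory_key
        self.buffer = []  # stored conversation turns

    def load_memory_variables(self, inputs):
        # Returned dict is merged into the chain's inputs under memory_key.
        return {self.memory_key: self.buffer}

def prep_inputs(inputs, memory, required_keys):
    """Toy stand-in for Chain.prep_inputs + _validate_inputs."""
    inputs = dict(inputs, **memory.load_memory_variables(inputs))
    missing = set(required_keys) - set(inputs)
    if missing:
        raise ValueError(f"Missing some input keys: {missing}")
    return inputs

memory = Memory(memory_key="chat_history")
prepped = prep_inputs({"input": "hi"}, memory, {"input", "chat_history"})
print(sorted(prepped))  # ['chat_history', 'input']
```

If the memory were created with a mismatched key (say `memory_key="history"`), validation would fail on the missing `chat_history` key, which is why the memory_key must match what the agent's prompt expects.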

For more details, you can refer to these sources:

I hope this helps! Let me know if you have any other questions.

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

dianz17 commented 10 months ago

@dosu-bot I tried just that, and now I'm getting the error about 'intermediate_steps':


chat_history = []

input_data = {
    'input': 'What are the closing hours?',
    'chat_history': chat_history,  
    'intermediate_steps': []  
}
output = agent(input_data)

> Entering new AgentExecutor chain...
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[74], line 8
      1 chat_history = []
      3 input_data = {
      4     'input': 'What are the closing hours?',
      5     'chat_history': chat_history,  # replace with your actual chat history
      6     'intermediate_steps': []  # replace with your actual intermediate steps if any
      7 }
----> 8 output = agent(input_data)

File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:310, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    308 except BaseException as e:
    309     run_manager.on_chain_error(e)
--> 310     raise e
    311 run_manager.on_chain_end(outputs)
    312 final_outputs: Dict[str, Any] = self.prep_outputs(
    313     inputs, outputs, return_only_outputs
    314 )

File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:304, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    297 run_manager = callback_manager.on_chain_start(
    298     dumpd(self),
    299     inputs,
    300     name=run_name,
    301 )
    302 try:
    303     outputs = (
--> 304         self._call(inputs, run_manager=run_manager)
    305         if new_arg_supported
    306         else self._call(inputs)
    307     )
    308 except BaseException as e:
    309     run_manager.on_chain_error(e)

File /opt/conda/lib/python3.10/site-packages/langchain/agents/agent.py:1146, in AgentExecutor._call(self, inputs, run_manager)
   1144 # We now enter the agent loop (until it returns something).
   1145 while self._should_continue(iterations, time_elapsed):
-> 1146     next_step_output = self._take_next_step(
   1147         name_to_tool_map,
   1148         color_mapping,
   1149         inputs,
   1150         intermediate_steps,
   1151         run_manager=run_manager,
   1152     )
   1153     if isinstance(next_step_output, AgentFinish):
   1154         return self._return(
   1155             next_step_output, intermediate_steps, run_manager=run_manager
   1156         )

File /opt/conda/lib/python3.10/site-packages/langchain/agents/agent.py:933, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
    930     intermediate_steps = self._prepare_intermediate_steps(intermediate_steps)
    932     # Call the LLM to see what to do.
--> 933     output = self.agent.plan(
    934         intermediate_steps,
    935         callbacks=run_manager.get_child() if run_manager else None,
    936         **inputs,
    937     )
    938 except OutputParserException as e:
    939     if isinstance(self.handle_parsing_errors, bool):

TypeError: Agent.plan() got multiple values for argument 'intermediate_steps'
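This TypeError is mechanical rather than LangChain-specific: the traceback shows `AgentExecutor._take_next_step` calling `self.agent.plan(intermediate_steps, callbacks=..., **inputs)`, so if the caller's input dict also contains an `intermediate_steps` key, Python sees that argument twice. A minimal reproduction with a stand-in function:

```python
def plan(intermediate_steps, callbacks=None, **kwargs):
    """Toy stand-in for Agent.plan as called in the traceback above."""
    return "next action"

inputs = {
    "input": "What are the closing hours?",
    "chat_history": [],
    "intermediate_steps": [],  # collides with the positional argument below
}

try:
    # Mirrors output = self.agent.plan(intermediate_steps, callbacks=..., **inputs)
    plan([], callbacks=None, **inputs)
except TypeError as e:
    print(e)  # plan() got multiple values for argument 'intermediate_steps'
```

In other words, the earlier advice to add `intermediate_steps` to the input dict is what triggers this error: the executor manages that value itself, so it should be left out of the call.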
dosubot[bot] commented 10 months ago

πŸ€–

Based on the error message you're receiving and the code snippet you've provided, it seems like the 'input' key is not being found in the dictionary that is being passed to the agent. This could be due to the fact that the 'input' key is not present in the dictionary at the time of calling the agent, or it could be due to a typo or case-sensitivity issue.

However, from the context provided, it appears that the agent is expecting a dictionary with keys 'input', 'chat_history', and 'agent_scratchpad'.

To fix this issue, you need to ensure that the dictionary you're passing to the agent contains all the required keys. You can modify your dictionary to include the 'agent_scratchpad' key. Here's how you can do it:

agent({"input": question, "chat_history": chat_history, "agent_scratchpad": None})

This should resolve the ValueError you're encountering.

In the LangChain Python framework, the 'input' key is used as an input variable in the creation of a chat prompt. It is used to store and pass the user's input to the language model. If no input_variables are provided, it defaults to ["input", "agent_scratchpad"]. This means that the 'input' key is used as a default input variable if none are specified. This 'input' variable is then used in the ChatPromptTemplate to create a prompt for the language model. The language model uses this prompt to generate a response based on the user's input and the agent's scratchpad.

I hope this helps! If you have any other questions or if the issue persists, please let me know.


failable commented 9 months ago

@dianz17 Have you fixed this?

dosubot[bot] commented 6 months ago

Hi, @dianz17,

I'm helping the LangChain team manage their backlog and am marking this issue as stale. From what I understand, you raised an issue regarding errors encountered when using a conversational agent with memory and multiple retrievals. Despite attempting suggested solutions, you continued to encounter errors related to 'intermediate_steps' and 'input' keys.

Could you please confirm if this issue is still relevant to the latest version of the LangChain repository? If it is, please let the LangChain team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days. Thank you!