Closed glejdis closed 6 months ago
🤖
The error you're encountering occurs because the `ConversationBufferMemory` object in LangChain expects a single output key, but it receives a dictionary with multiple keys when `return_intermediate_steps` is set to `True`. This is because the `AgentExecutor` returns both the final output and the intermediate steps when `return_intermediate_steps=True`.
Currently, LangChain does not support saving both the final output and the intermediate steps in the conversation memory. The `ConversationBufferMemory` object is designed to save only the final output of the conversation.
However, you can create a custom memory object that supports saving both the final output and the intermediate steps. Here's an example of how you can do this:
```python
from typing import Any, Dict, Tuple

from langchain.memory import BaseChatMemory


class CustomChatMemory(BaseChatMemory):
    def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, Any]) -> None:
        input_str, output_str = self._get_input_output(inputs, outputs)
        self.chat_memory.add_user_message(input_str)
        self.chat_memory.add_ai_message(output_str)
        if "intermediate_steps" in outputs:
            self.chat_memory.add_ai_message(str(outputs["intermediate_steps"]))

    def _get_input_output(self, inputs: Dict[str, Any], outputs: Dict[str, Any]) -> Tuple[str, str]:
        input_str = inputs.get(self.input_key, "")
        output_str = outputs.get(self.output_key, "")
        return input_str, output_str
```
In this custom memory object, the `save_context` method checks whether `intermediate_steps` is present in the outputs and, if it is, saves it in the conversation memory as an additional AI message.
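The effect on the stored history can be demonstrated with a minimal pure-Python stand-in for the chat message list (a sketch, not LangChain's `ChatMessageHistory`):

```python
class MiniHistory:
    """Tiny stand-in for a chat message store."""

    def __init__(self):
        self.messages = []

    def add_user_message(self, text):
        self.messages.append(("human", text))

    def add_ai_message(self, text):
        self.messages.append(("ai", text))


def save_context(history, inputs, outputs, input_key="input", output_key="output"):
    # Same flow as the custom save_context above: user turn, AI turn,
    # then the stringified intermediate steps as an extra AI message.
    history.add_user_message(inputs.get(input_key, ""))
    history.add_ai_message(outputs.get(output_key, ""))
    if "intermediate_steps" in outputs:
        history.add_ai_message(str(outputs["intermediate_steps"]))


h = MiniHistory()
save_context(h, {"input": "hi"}, {"output": "whats up", "intermediate_steps": []})
# h.messages now holds three entries: the user turn, the answer,
# and the stringified intermediate steps.
```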
Then, you can use this custom memory object in your `AgentExecutor`:
```python
memory = CustomChatMemory(memory_key="chat_history")

agent_executor = AgentExecutor(
    tools=[retriever_tool],
    agent=agent,
    verbose=True,
    return_intermediate_steps=True,
    handle_parsing_errors=True,
    max_iterations=15,
    memory=memory,
)
```
Please note that this is a workaround and might not be the best solution. It's always a good idea to check with the LangChain community or the repository maintainers for the best practices and solutions.
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
```
TypeError                                 Traceback (most recent call last)
(... structured-chat agent prompt elided: instructions to respond with a JSON
blob containing "action"/"action_input", a repeated Thought/Action/Observation
loop, and a closing "Final Answer" action ...)
     52 ]
     53 )
---> 55 memory = CustomChatMemory(memory_key="chat_history")
     57 # ConversationBufferWindowMemory(k=1) using the last k interactions
     59 memory.save_context({"input": "hi"}, {"output": "whats up"})

TypeError: Can't instantiate abstract class CustomChatMemory with abstract methods load_memory_variables, memory_variables
```
🤖
The error you're encountering arises because the `ConversationBufferMemory` class in LangChain is designed to handle a single output key. When you set `return_intermediate_steps=True` in the `AgentExecutor`, it returns a dictionary with multiple keys, which is not compatible with the `ConversationBufferMemory` class.
To resolve this issue, you could create a custom memory class that can handle multiple output keys. Here's an example of how you could do this:
```python
from typing import Any, Dict, Tuple

from langchain.memory import BaseChatMemory


class CustomChatMemory(BaseChatMemory):
    def _get_input_output(self, inputs: Dict[str, Any], outputs: Dict[str, Any]) -> Tuple[str, str]:
        # Note: _format_input is not defined here; a symmetric helper
        # for the inputs dict would still need to be implemented.
        input_str = self._format_input(inputs)
        output_str = self._format_output(outputs)
        return input_str, output_str

    def _format_output(self, outputs: Dict[str, Any]) -> str:
        # Join every key/value pair into one string, one pair per line.
        return "\n".join(f"{k}: {v}" for k, v in outputs.items())
```
In this custom memory class, the `_format_output` method formats the outputs as a string with each key-value pair on its own line, allowing the class to handle multiple output keys.
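The formatting step itself is plain Python and easy to verify in isolation (the dictionary values here are made-up placeholders):

```python
outputs = {
    "output": "Julia Roberts is an actress.",
    "intermediate_steps": "[...]",
}

# One "key: value" line per entry, joined with newlines.
formatted = "\n".join(f"{k}: {v}" for k, v in outputs.items())
print(formatted)
# output: Julia Roberts is an actress.
# intermediate_steps: [...]
```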
Then, you can use this custom memory class in your code like this:

```python
memory = CustomChatMemory(memory_key="chat_history")
```
Please note that this is a simple example and you might need to adjust it according to your specific needs.
As for the `TypeError: Can't instantiate abstract class CustomChatMemory with abstract methods load_memory_variables, memory_variables`: you are trying to instantiate `CustomChatMemory` while it still has the abstract members `load_memory_variables` and `memory_variables` (in LangChain's `BaseMemory`, the latter is a property). Python does not allow instantiating a class with unimplemented abstract methods, so you need to provide concrete implementations for both in your `CustomChatMemory` class.
Here's an example of how you could do this:
```python
from typing import Any, Dict, List


class CustomChatMemory(BaseChatMemory):
    @property
    def memory_variables(self) -> List[str]:
        # The keys this memory exposes to the prompt;
        # implement according to your needs.
        return ["chat_history"]

    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
        # Implement this method according to your needs, e.g. return
        # the stored messages under the key listed above.
        return {"chat_history": self.chat_memory.messages}

    # ... rest of your code ...
```
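The underlying Python behavior can be seen with a minimal `abc` example, independent of LangChain:

```python
from abc import ABC, abstractmethod


class Memory(ABC):
    @abstractmethod
    def load_memory_variables(self):
        ...


class Incomplete(Memory):
    pass  # abstract method left unimplemented


class Complete(Memory):
    def load_memory_variables(self):
        return {}


try:
    Incomplete()  # raises TypeError, the same kind as in the traceback above
except TypeError:
    print("Incomplete cannot be instantiated")

Complete()  # fine once every abstract method has an implementation
```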
I hope this helps! If you have any other questions or need further clarification, feel free to ask.
@glejdis you can try:

```python
memory = ConversationBufferMemory(memory_key="chat_history")
memory.output_key = "output"
```
Checked other resources
Example Code
The following code:
Error Message and Stack Trace (if applicable)
```
ValueError                                Traceback (most recent call last)
File, line 64
     63 text = "Who is Julia Roberts?"
---> 64 answer = agent_executor.invoke(
     65     {
     66         "input": text,
     67     }
     68 )
     69 answer

File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/langchain/chains/base.py:164, in Chain.invoke(self, input, config, **kwargs)
    162     raise e
    163 run_manager.on_chain_end(outputs)
--> 164 final_outputs: Dict[str, Any] = self.prep_outputs(
    165     inputs, outputs, return_only_outputs
    166 )
    167 if include_run_info:
    168     final_outputs[RUN_KEY] = RunInfo(run_id=run_manager.run_id)

File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/langchain/chains/base.py:440, in Chain.prep_outputs(self, inputs, outputs, return_only_outputs)
    438 self._validate_outputs(outputs)
    439 if self.memory is not None:
--> 440     self.memory.save_context(inputs, outputs)
    441 if return_only_outputs:
    442     return outputs

File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/langchain/memory/chat_memory.py:37, in BaseChatMemory.save_context(self, inputs, outputs)
     35 def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
     36     """Save context from this conversation to buffer."""
---> 37     input_str, output_str = self._get_input_output(inputs, outputs)
     38     self.chat_memory.add_user_message(input_str)
     39     self.chat_memory.add_ai_message(output_str)

File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/langchain/memory/chat_memory.py:29, in BaseChatMemory._get_input_output(self, inputs, outputs)
     27 if self.output_key is None:
     28     if len(outputs) != 1:
---> 29         raise ValueError(f"One output key expected, got {outputs.keys()}")
     30     output_key = list(outputs.keys())[0]
     31 else:

ValueError: One output key expected, got dict_keys(['output', 'intermediate_steps'])
```
Description
I am trying to output the intermediate steps as well as save the previous chat history, but it seems I cannot do both at the same time. The code attached above works when `return_intermediate_steps` is set to `False`.
System Info
```
langchain==0.1.1
openai==1.7.0
```