langchain-ai / langchain

πŸ¦œπŸ”— Build context-aware reasoning applications
https://python.langchain.com
MIT License
94.85k stars 15.36k forks

ValueError: One output key expected, got dict_keys(['output', 'intermediate_steps']) when using create_structured_chat_agent with chat_memory and intermediate steps #16791

Closed glejdis closed 6 months ago

glejdis commented 9 months ago

Example Code

The following code:


llm = AzureChatOpenAI(
    azure_endpoint=AZURE_TEXT_ENDPOINT,
    openai_api_version=OPEN_API_VERSION,
    deployment_name=AZURE_TEXT_DEPLOYMENT, #"gpt-4_32k",
    openai_api_key=OPENAI_TEXT_API_KEY,
    openai_api_type=OPENAI_API_TYPE, #"azure",
    temperature=0
)

ai_search_endpoint = get_ai_search_endpoint()
ai_search_admin_key = get_ai_search_admin_key()

vector_store = AzureSearch(
    azure_search_endpoint=ai_search_endpoint,
    azure_search_key=ai_search_admin_key,
    index_name=index_name,
    embedding_function=embeddings.embed_query,
    content_key="xxx"
)

# Retriever that uses Azure Cognitive Search.
azure_search_retriever = AzureSearchVectorStoreRetriever(
    vectorstore=vector_store,
    search_type="hybrid",
    k=3,
)

retriever_tool = create_retriever_tool(
    azure_search_retriever,
    "Retriever",
    "Useful when you need to retrieve information from documents", 
)

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", """Remember the previous chats: {chat_history}. Respond to the human as helpfully and accurately as possible. You are a helpful assistant who retrieves information from a database of documents. If you cannot find the answer in the documents please write: 'I do not have the answer from the given information'. You have access to the following tools:\n\n{tools}\n\nUse a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).\n\nValid "action" values: "Final Answer" or {tool_names}\n\nProvide only ONE action per $JSON_BLOB, as shown:\n\n```\n{{\n  "action": $TOOL_NAME,\n  "action_input": $INPUT\n}}\n```\n\nFollow this format:\n\nQuestion: input question to answer\nThought: consider previous and subsequent steps\nAction:\n```\n$JSON_BLOB\n```\nObservation: action result\n... (repeat Thought/Action/Observation N times)\nThought: I know what to respond\nAction:\n```\n{{\n  "action": "Final Answer",\n  "action_input": "Final response to human"\n}}\n\nBegin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation'"""),
        ("user", "{input}\n\n{agent_scratchpad}\n (reminder to respond in a JSON blob no matter what)"),
    ]
)

memory = ConversationBufferMemory(memory_key="chat_history")

memory.save_context({"input": "hi"}, {"output": "whats up"})

try:

    agent = create_structured_chat_agent(llm, [retriever_tool], prompt)

    agent_executor = AgentExecutor(tools=[retriever_tool], 
                                   agent=agent, 
                                   verbose=True, 
                                   return_intermediate_steps=True,        
                                   handle_parsing_errors=True,
                                   max_iterations=15,
                                   memory=memory
                                )

except Exception as e:
    print(e)
    print("error instantiating the agent")

text = "Who is Julia Roberts?"
answer = agent_executor.invoke(
    {
        "input": text,
    }
)
answer

Error Message and Stack Trace (if applicable)

ValueError                                Traceback (most recent call last)
File , line 64
     63 text = "Who is Julia Roberts?"
---> 64 answer = agent_executor.invoke(
     65     {
     66         "input": text,
     67     }
     68 )
     69 answer

File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/langchain/chains/base.py:164, in Chain.invoke(self, input, config, **kwargs)
    162     raise e
    163 run_manager.on_chain_end(outputs)
--> 164 final_outputs: Dict[str, Any] = self.prep_outputs(
    165     inputs, outputs, return_only_outputs
    166 )
    167 if include_run_info:
    168     final_outputs[RUN_KEY] = RunInfo(run_id=run_manager.run_id)

File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/langchain/chains/base.py:440, in Chain.prep_outputs(self, inputs, outputs, return_only_outputs)
    438 self._validate_outputs(outputs)
    439 if self.memory is not None:
--> 440     self.memory.save_context(inputs, outputs)
    441 if return_only_outputs:
    442     return outputs

File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/langchain/memory/chat_memory.py:37, in BaseChatMemory.save_context(self, inputs, outputs)
    35 def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
    36     """Save context from this conversation to buffer."""
---> 37     input_str, output_str = self._get_input_output(inputs, outputs)
    38     self.chat_memory.add_user_message(input_str)
    39     self.chat_memory.add_ai_message(output_str)

File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/langchain/memory/chat_memory.py:29, in BaseChatMemory._get_input_output(self, inputs, outputs)
    27 if self.output_key is None:
    28     if len(outputs) != 1:
---> 29         raise ValueError(f"One output key expected, got {outputs.keys()}")
    30 output_key = list(outputs.keys())[0]
    31 else:

ValueError: One output key expected, got dict_keys(['output', 'intermediate_steps'])

Description

I am trying to output the intermediate steps as well as save the previous chat history, but it seems I cannot do both at the same time. The code attached above works when return_intermediate_steps is set to False.

System Info

langchain==0.1.1
openai==1.7.0

dosubot[bot] commented 9 months ago

πŸ€–

The error occurs because the ConversationBufferMemory object in LangChain expects a single output key, but when return_intermediate_steps is set to True the AgentExecutor returns a dictionary with two keys: the final output and the intermediate steps.

Currently, LangChain does not support saving both the final output and the intermediate steps in the conversation memory; ConversationBufferMemory is designed to save only the final output of the conversation.
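The failing check is easy to reproduce outside LangChain. Here is a minimal plain-Python sketch of the key-selection logic shown in the traceback (the function name `pick_output` is made up for illustration; it is not the real LangChain method):

```python
from typing import Any, Dict, Optional

def pick_output(outputs: Dict[str, Any], output_key: Optional[str] = None) -> Any:
    # Simplified model of BaseChatMemory._get_input_output from the traceback:
    # with no explicit output_key, exactly one key is required.
    if output_key is None:
        if len(outputs) != 1:
            raise ValueError(f"One output key expected, got {outputs.keys()}")
        output_key = list(outputs.keys())[0]
    return outputs[output_key]

# With return_intermediate_steps=True the executor returns two keys,
# so the memory cannot guess which one to store:
outputs = {"output": "final answer", "intermediate_steps": []}
try:
    pick_output(outputs)
except ValueError as e:
    print(e)  # One output key expected, got dict_keys(['output', 'intermediate_steps'])

# Telling the memory which key to use resolves the ambiguity:
print(pick_output(outputs, output_key="output"))  # final answer
```

This is why the same code runs fine with return_intermediate_steps=False: the outputs dict then has only one key.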

However, you can create a custom memory object that supports saving both the final output and the intermediate steps. Here's an example of how you can do this:

from typing import Any, Dict, Tuple

from langchain.memory.chat_memory import BaseChatMemory

class CustomChatMemory(BaseChatMemory):
    def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, Any]) -> None:
        input_str, output_str = self._get_input_output(inputs, outputs)
        self.chat_memory.add_user_message(input_str)
        self.chat_memory.add_ai_message(output_str)
        if 'intermediate_steps' in outputs:
            self.chat_memory.add_ai_message(str(outputs['intermediate_steps']))

    def _get_input_output(self, inputs: Dict[str, Any], outputs: Dict[str, Any]) -> Tuple[str, str]:
        input_str = inputs.get(self.input_key or "input", "")
        output_str = outputs.get(self.output_key or "output", "")
        return input_str, output_str

In this custom memory object, the save_context method checks whether intermediate_steps is present in the outputs and, if it is, stores it in the conversation memory as an additional AI message.

Then, you can use this custom memory object in your AgentExecutor:

memory = CustomChatMemory(memory_key="chat_history")

agent_executor = AgentExecutor(tools=[retriever_tool], 
                               agent=agent, 
                               verbose=True, 
                               return_intermediate_steps=True,        
                               handle_parsing_errors=True,
                               max_iterations=15,
                               memory=memory
                            )

Please note that this is a workaround and might not be the best solution. It's always a good idea to check with the LangChain community or the repository maintainers for the best practices and solutions.


This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a πŸ‘ if this is helpful and πŸ‘Ž if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

glejdis commented 9 months ago

@dosu-bot TypeError: Can't instantiate abstract class CustomChatMemory with abstract methods load_memory_variables, memory_variables

TypeError                                 Traceback (most recent call last)
File , line 55
     39 retriever_tool = create_retriever_tool(
     40     azure_search_retriever,
     41     "Retriever",
     42     "Useful when you need to retrieve information from documents",
     43 )
     46 prompt = ChatPromptTemplate.from_messages(
     47     [
     48         ("system", """Remember the previous chats: {chat_history}. Respond to the human as helpfully and accurately as possible. You are a helpful assistant who retrieves information from a database of documents. If you cannot find the answer in the documents please write: 'I do not have the answer from the given information'. You have access to the following tools:\n\n{tools}\n\nUse a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).\n\nValid "action" values: "Final Answer" or {tool_names}\n\nProvide only ONE action per $JSON_BLOB, as shown:\n\n\n{{\n "action": $TOOL_NAME,\n "action_input": $INPUT\n}}\n\n\nFollow this format:\n\nQuestion: input question to answer\nThought: consider previous and subsequent steps\nAction:\n\n$JSON_BLOB\n\nObservation: action result\n... (repeat Thought/Action/Observation N times)\nThought: I know what to respond\nAction:\n\n{{\n "action": "Final Answer",\n "action_input": "Final response to human"\n}}\n\nBegin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:$JSON_BLOB```then Observation'"""),
     (...)
     52     ]
     53 )
---> 55 memory = CustomChatMemory(memory_key="chat_history")
     57 # ConversationBufferWindowMemory( k=1) using the last k interations
     59 memory.save_context({"input": "hi"}, {"output": "whats up"})

TypeError: Can't instantiate abstract class CustomChatMemory with abstract methods load_memory_variables, memory_variables

dosubot[bot] commented 9 months ago

πŸ€–

The error occurs because the ConversationBufferMemory class in LangChain is designed to handle a single output key. When you set return_intermediate_steps=True in the AgentExecutor, it returns a dictionary with multiple keys, which ConversationBufferMemory cannot handle.

To resolve this issue, you could create a custom memory class that can handle multiple output keys. Here's an example of how you could do this:

from typing import Any, Dict, Tuple

from langchain.memory.chat_memory import BaseChatMemory

class CustomChatMemory(BaseChatMemory):
    def _get_input_output(self, inputs: Dict[str, Any], outputs: Dict[str, Any]) -> Tuple[str, str]:
        input_str = inputs.get(self.input_key or "input", "")
        output_str = self._format_output(outputs)
        return input_str, output_str

    def _format_output(self, outputs: Dict[str, Any]) -> str:
        # Join every output key (final answer and intermediate steps) into one string
        return "\n".join(f"{k}: {v}" for k, v in outputs.items())

In this custom memory class, the _format_output helper formats the outputs as a string with each key-value pair on its own line, which allows the class to handle multiple output keys.
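As a quick sanity check, the formatting idea above can be exercised as a standalone function, with no LangChain installation required (`format_output` here is a free function written for illustration, not the real method):

```python
from typing import Any, Dict

def format_output(outputs: Dict[str, Any]) -> str:
    # Collapse a multi-key outputs dict into one string so it can be
    # stored as a single AI message in the chat history.
    return "\n".join(f"{k}: {v}" for k, v in outputs.items())

outputs = {"output": "final answer", "intermediate_steps": [("action", "observation")]}
print(format_output(outputs))
# output: final answer
# intermediate_steps: [('action', 'observation')]
```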

Then, you can use this custom memory class in your code like this:

memory = CustomChatMemory(memory_key="chat_history")

Please note that this is a simple example and you might need to adjust it according to your specific needs.

As for the TypeError: Can't instantiate abstract class CustomChatMemory with abstract methods load_memory_variables, memory_variables: Python refuses to instantiate a class that still has abstract methods, so your CustomChatMemory must provide concrete implementations of load_memory_variables and memory_variables (the latter is a property on the base class).

Here's an example of how you could do this:

class CustomChatMemory(BaseChatMemory):
    @property
    def memory_variables(self) -> List[str]:
        # Implement this: the keys this memory exposes, e.g. ["chat_history"]
        ...

    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
        # Implement this: return the stored history under the memory key
        ...

    # ... rest of your code ...
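The failure mode itself is plain Python behaviour, not LangChain-specific. A toy model (class names are illustrative, not the real LangChain hierarchy):

```python
from abc import ABC, abstractmethod

class MemoryBase(ABC):
    # Stand-in for the abstract base class named in the error message.
    @property
    @abstractmethod
    def memory_variables(self): ...

    @abstractmethod
    def load_memory_variables(self, inputs): ...

class IncompleteMemory(MemoryBase):
    pass  # implements neither abstract method

class CompleteMemory(MemoryBase):
    @property
    def memory_variables(self):
        return ["chat_history"]

    def load_memory_variables(self, inputs):
        return {"chat_history": ""}

try:
    IncompleteMemory()  # raises TypeError: abstract methods remain
except TypeError as e:
    print(e)

CompleteMemory()  # instantiates fine once both methods are concrete
```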

I hope this helps! If you have any other questions or need further clarification, feel free to ask.


PJarbas commented 2 months ago

@glejdis you can try:

memory = ConversationBufferMemory(memory_key="chat_history")
memory.output_key = "output"