langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com
MIT License

How can we selectively pass memory to specific tools without passing it to all tools? #13112

Closed: Chetan8000 closed this issue 8 months ago

Chetan8000 commented 11 months ago

### Issue you'd like to raise

For example, in my case below:


```python
english_tools = [
    Tool(name="SomeNAME_1",
         func=lambda q: app.finance_chain.run(q),
         description=" Some app related description ",
         return_direct=True,
         coroutine=lambda q: app.finance_chain.arun(q),
         ),
    Tool(name="SomeNAME_2",
         func=lambda q: app.rqa(q),
         description=" Some app related description ",
         coroutine=lambda q: app.rqa_english.arun(q),
         return_direct=True,
         ),
    Tool.from_function(
        name="SomeNAME_3",
        func=lambda q: app.pd_agent(q),
        description=" Some app related description",
        coroutine=lambda q: app.pd_agent.arun(q),
    ),
]
```

So when SomeNAME_3 is invoked, I don't want to pass memory to this tool.

### Suggestion:

_No response_
dosubot[bot] commented 11 months ago

🤖

Hello @Chetan8000,

I'm an AI assistant designed to help with issues related to the LangChain repository. I can assist you with solving bugs, answering questions, and providing guidance on contributing to the project. While we're waiting for a human maintainer, I'm here to provide initial help.

Just wanted to let you know that I'm currently looking into your issue regarding selectively passing memory to specific tools. I'll get back to you with a more comprehensive response shortly.

dosubot[bot] commented 11 months ago

🤖

To selectively pass memory to specific tools in your application, you can modify the load_memory_variables method in the GenerativeAgentMemory class. This method currently fetches relevant memories for all queries and returns them. You can add a condition to check the name of the tool and only fetch memories for the tools you want.

Here's how you can modify the load_memory_variables method:

```python
def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:
    """Return key-value pairs given the text input to the chain."""
    queries = inputs.get(self.queries_key)
    tool_name = inputs.get('tool_name')  # assuming tool_name is passed in inputs
    now = inputs.get(self.now_key)
    if queries is not None and tool_name in ['SomeNAME_1', 'SomeNAME_2']:
        relevant_memories = [
            mem for query in queries for mem in self.fetch_memories(query, now=now)
        ]
        return {
            self.relevant_memories_key: self.format_memories_detail(
                relevant_memories
            ),
            self.relevant_memories_simple_key: self.format_memories_simple(
                relevant_memories
            ),
        }

    most_recent_memories_token = inputs.get(self.most_recent_memories_token_key)
    if most_recent_memories_token is not None:
        return {
            self.most_recent_memories_key: self._get_memories_until_limit(
                most_recent_memories_token
            )
        }
    return {}
```

In this modification, we're assuming that the name of the tool is passed in the inputs under the key 'tool_name'. If the tool_name is either 'SomeNAME_1' or 'SomeNAME_2', the method fetches the relevant memories. If the tool_name is 'SomeNAME_3', it won't fetch any memories. You can adjust this condition to suit your needs.
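
For illustration, here is a hedged usage sketch of the modified method, assuming a `GenerativeAgentMemory` instance named `generative_memory` (a hypothetical name) and the default `queries_key` of `"queries"`; the `"tool_name"` key is the convention introduced above, not a built-in LangChain parameter:

```python
# Hypothetical usage of the modified load_memory_variables.
# "queries" assumes the default queries_key; "tool_name" is the key
# introduced in the modification above, not an existing parameter.
memory_variables = generative_memory.load_memory_variables(
    {"queries": ["What did we discuss about revenue?"], "tool_name": "SomeNAME_3"}
)
# Because "SomeNAME_3" is not in the allowed list, the method falls through
# and returns an empty dict here, i.e. no memories are surfaced for it.
print(memory_variables)
```

With a call like this, the allowed tools would still receive their relevant memories, while `SomeNAME_3` falls through to the empty-dict branch.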

This modification is based on the current implementation of the load_memory_variables method in the GenerativeAgentMemory class, which you can find in the libs/experimental/langchain_experimental/generative_agents/memory.py file in the LangChain repository.

Please note that this is a suggested modification and might need further adjustments based on your specific use case and the overall structure of your application. If you encounter any issues or need further assistance, feel free to ask.
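
As a further illustration, here is a minimal sketch, not taken from the replies above, of a different way to keep chat history away from a single tool: since the agent-level memory is rendered into the agent's prompt rather than handed to tool functions, you can instead close over the memory object only in the tools that should see history, and leave `SomeNAME_3` with the bare query. The `with_history` helper is hypothetical, and the `app` and `memory` objects are the placeholders from the snippets in this thread (a `ConversationBufferWindowMemory` with `memory_key="chat_history"` and `return_messages=True`, as shown further down):

```python
from langchain.agents import Tool
from langchain.schema import get_buffer_string

def with_history(run_fn, memory):
    """Wrap a tool function so the wrapped tool also receives recent chat history.

    Assumes `memory` uses memory_key="chat_history" and return_messages=True,
    as in the ConversationBufferWindowMemory shown later in this thread.
    """
    def _run(q: str) -> str:
        messages = memory.load_memory_variables({})["chat_history"]
        history = get_buffer_string(messages)  # render message objects as plain text
        return run_fn(f"Chat history:\n{history}\n\nQuestion: {q}")
    return _run

english_tools = [
    Tool(name="SomeNAME_1",
         func=with_history(lambda q: app.finance_chain.run(q), memory),
         description="Some app related description",
         return_direct=True),
    Tool(name="SomeNAME_3",  # no wrapper, so this tool never sees the chat history
         func=lambda q: app.pd_agent(q),
         description="Some app related description"),
]
```

Whether this wrapping approach or the memory-class modification above fits better depends on how much of the conversation each tool actually needs.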

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

Chetan8000 commented 11 months ago
```python
english_tools = [
    Tool(name="SomeNAME_1",
         func=lambda q: app.finance_chain.run(q),
         description=" Some app related description ",
         return_direct=True,
         coroutine=lambda q: app.finance_chain.arun(q),
         ),
    Tool(name="SomeNAME_2",
         func=lambda q: app.rqa(q),
         description=" Some app related description ",
         coroutine=lambda q: app.rqa_english.arun(q),
         return_direct=True,
         ),
    Tool.from_function(
        name="SomeNAME_3",
        func=lambda q: app.pd_agent(q),
        description=" Some app related description",
        coroutine=lambda q: app.pd_agent.arun(q),
    ),
]

memory = ConversationBufferWindowMemory(
    memory_key="chat_history", chat_memory=message_history, k=3, return_messages=True)

english_agent = initialize_agent(agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION, llm=llm,
                                 tools=english_tools,
                                 verbose=True, memory=memory, handle_parsing_errors=True)
```

How do I pass tool-specific memory?

dosubot[bot] commented 8 months ago

Hi, @Chetan8000,

I'm helping the LangChain team manage their backlog and am marking this issue as stale. From what I understand, the issue involves selectively passing memory to specific tools without passing it to all tools. Dosubot provided a suggested modification to the load_memory_variables method in the GenerativeAgentMemory class, which seems to provide a solution to the issue. Chetan8000, you later shared a Python code snippet and asked how to pass specific memory to tools.

Could you please confirm if this issue is still relevant to the latest version of the LangChain repository? If it is, please let the LangChain team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days. Thank you!