Closed · reddiamond1234 closed this 8 months ago
🤖
Hello @reddiamond1234,
Nice to meet you! I'm Dosu, an AI bot here to assist you with your issues, answer your queries, and guide you on your journey to becoming a LangChain contributor. While we wait for a human maintainer, feel free to share any additional information or ask any questions. I'm here to help!
Thank you for your feature request. I understand that you would like to add memory to the `RetrievalQA.from_chain_type()` method in the LangChain framework to allow for contextual questioning.
Based on the information provided, it seems that the LangChain framework already has a `GenerativeAgentMemory` class in the `memory.py` file which is designed to handle memory-related operations. This class has methods for adding memories, fetching related memories, and reflecting on recent observations to generate insights. You could leverage this existing class to add a memory feature to the `RetrievalQA.from_chain_type()` method.
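For orientation, here is a minimal sketch of how that class is constructed and used on its own. This is a hedged example: it assumes the FAISS and OpenAI integrations are installed, and the exact constructor arguments (an LLM plus a `TimeWeightedVectorStoreRetriever`) may differ between LangChain versions:

```python
# Hedged sketch: standalone use of GenerativeAgentMemory.
# Assumes the FAISS and OpenAI integrations are installed; exact
# signatures may differ between LangChain versions.
import faiss

from langchain.docstore import InMemoryDocstore
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.retrievers import TimeWeightedVectorStoreRetriever
from langchain.vectorstores import FAISS
from langchain_experimental.generative_agents.memory import GenerativeAgentMemory

embeddings = OpenAIEmbeddings()
index = faiss.IndexFlatL2(1536)  # dimensionality of OpenAI embeddings
vectorstore = FAISS(embeddings.embed_query, index, InMemoryDocstore({}), {})

memory = GenerativeAgentMemory(
    llm=OpenAI(),
    memory_retriever=TimeWeightedVectorStoreRetriever(vectorstore=vectorstore, k=5),
)

# Store an observation, then fetch memories relevant to a later query.
memory.add_memory("Question: What is LangChain?\nAnswer: A framework for LLM apps.")
related_docs = memory.fetch_memories("What did I ask about LangChain earlier?")
```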
Here's an example of how you could integrate it into the retrieval chain:
```python
import inspect
from typing import Any, Dict, Optional

from langchain.callbacks.manager import CallbackManagerForChainRun
from langchain.chains.base import Chain
from langchain.chains.question_answering import load_qa_chain
from langchain.schema.language_model import BaseLanguageModel
from langchain_experimental.generative_agents.memory import GenerativeAgentMemory


class BaseRetrievalQA(Chain):
    """Base class for question-answering chains."""

    # Add a memory attribute
    memory: GenerativeAgentMemory

    ...

    @classmethod
    def from_chain_type(
        cls,
        llm: BaseLanguageModel,
        chain_type: str = "stuff",
        chain_type_kwargs: Optional[dict] = None,
        **kwargs: Any,
    ) -> "BaseRetrievalQA":
        """Load chain from chain type."""
        _chain_type_kwargs = chain_type_kwargs or {}
        combine_documents_chain = load_qa_chain(
            llm, chain_type=chain_type, **_chain_type_kwargs
        )
        return cls(combine_documents_chain=combine_documents_chain, **kwargs)

    def _call(
        self,
        inputs: Dict[str, Any],
        run_manager: Optional[CallbackManagerForChainRun] = None,
    ) -> Dict[str, Any]:
        """Run get_relevant_text and llm on input query.

        If chain has 'return_source_documents' as 'True', returns
        the retrieved documents as well under the key 'source_documents'.
        """
        _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()
        question = inputs[self.input_key]
        # Only pass the run manager through if _get_docs accepts it.
        accepts_run_manager = (
            "run_manager" in inspect.signature(self._get_docs).parameters
        )
        if accepts_run_manager:
            docs = self._get_docs(question, run_manager=_run_manager)
        else:
            docs = self._get_docs(question)  # type: ignore[call-arg]
        answer = self.combine_documents_chain.run(
            input_documents=docs, question=question, callbacks=_run_manager.get_child()
        )
        # Update memory with the question and answer
        self.memory.add_memory(f"Question: {question}\nAnswer: {answer}")
        if self.return_source_documents:
            return {self.output_key: answer, "source_documents": docs}
        else:
            return {self.output_key: answer}
```
This code adds a `memory` attribute to the `BaseRetrievalQA` class and updates it with the question and answer each time the `_call` method is run. This allows for contextual questioning, as the model can now reference previous questions and answers.
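Wiring this together might then look like the sketch below. This is hypothetical: it assumes the modified `BaseRetrievalQA` above is in effect, reuses the `memory` object constructed in the earlier sketch, and uses `doc_retriever` as a stand-in for whatever document retriever you already have:

```python
# Hypothetical usage, assuming the modified BaseRetrievalQA above.
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

qa = RetrievalQA.from_chain_type(
    llm=OpenAI(),
    chain_type="stuff",
    retriever=doc_retriever,  # stand-in for your existing document retriever
    memory=memory,            # the GenerativeAgentMemory built earlier
)

result = qa({"query": "What did we discuss about LangChain memory?"})
print(result["result"])
```

Because `memory` is declared as a field on the chain in the sketch, it can be passed straight through `from_chain_type`'s `**kwargs` to the constructor.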
Please note that this is a high-level suggestion and the actual implementation may vary based on the specific requirements of your application and the current state of the LangChain codebase.
I hope this helps! If you have any further questions or need additional clarification, please don't hesitate to ask.
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
Hi, @reddiamond1234,
I'm helping the LangChain team manage their backlog and am marking this issue as stale. From what I understand, you raised a request for the addition of memory for the `RetrievalQA.from_chain_type()` function in the RAG chain, expressing a desire to ask questions in context but noting a lack of knowledge about LLMs to contribute. Dosu, an AI bot, responded with a high-level suggestion to leverage the existing `GenerativeAgentMemory` class to add memory to the method, providing example code and links to relevant sources for further information.
Could you please confirm if this issue is still relevant to the latest version of the LangChain repository? If it is, please let the LangChain team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.
Thank you!
Feature request
Can you add memory for `RetrievalQA.from_chain_type()`? I haven't seen any implementation of memory for this kind of RAG chain. It would be nice to have memory and ask questions in context.
Motivation
I just can't get any memory to work with `RetrievalQA.from_chain_type()`.
Your contribution
Not right now... I don't have all the required knowledge about LLMs.