Closed: alisola21 closed this issue 12 months ago
Hi, @alisola21! I'm Dosu, and I'm here to help the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.
From what I understand, you reported an issue regarding the memory in the conversational agent not retaining information correctly when asked unrelated questions. It seems like you're unsure if this is a problem with the memory or your code. However, there hasn't been any activity or comments on the issue yet.
Before we close this issue, we wanted to check with you if it is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.
Thank you for your understanding and cooperation. Let us know if you have any further questions or concerns.
I've solved this mess by re-implementing the whole thing: I keep the standard load_qa_chain and put the chat history directly at the bottom of the context, together with the retrieved documents. This way I don't have to use a custom prompt (which would stop working anyway if I changed the chain type to a value other than 'stuff').
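A plain-Python sketch of what this workaround could look like (the helper name and sample strings are hypothetical, not the actual code): the retrieved documents come first, and the chat history is appended at the bottom of the context, so the standard 'stuff' prompt needs no changes.

```python
def build_context(docs, chat_history):
    """Join the retrieved documents, then append the chat history at the
    bottom of the context so the standard load_qa_chain prompt can stay
    unchanged."""
    context = "\n\n".join(docs)
    if chat_history:
        lines = [f"{role}: {text}" for role, text in chat_history]
        context += "\n\nConversation so far:\n" + "\n".join(lines)
    return context

# Example data (illustrative only)
docs = ["OpenSearch can be installed with Docker or from a Tarball."]
history = [
    ("Human", "How do I install OpenSearch?"),
    ("AI", "You can use Docker or the Tarball distribution."),
]
print(build_context(docs, history))
```

Because the history lives in the context string rather than in a custom prompt template, the same function works regardless of which chain type consumes the context.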
System Info
Who can help?
No response
Information
Related Components
Reproduction
I would like to report a problem I am experiencing using memory, in particular ConversationBufferMemory.
I am developing a conversational agent capable of correctly answering very technical questions contained in documentation. The goal is to have answers generated primarily from the indexed documents and to fall back on the model's own knowledge only when the answer is not contained in the data. The indexed data is the OpenSearch documentation, collected from the web using scraping techniques. Next, I created the embeddings using OpenAI's embeddings and indexed the data in the vector store, following the instructions provided by the documentation.
Finally, I created the conversational agent using ConversationalRetrievalChain, which takes as input the prompt, the memory (ConversationBufferMemory), the model (gpt-3.5-turbo), and a retriever based on the indexed data.

Expected behavior
Testing the code with questions about the OpenSearch documentation, the results are correct and memory seems to work. In fact, the model can tell that the follow-up question "and with Tarball?" refers to the installation of OpenSearch.
However, when asked questions unrelated to the indexed data (e.g., how to install Microsoft Word and PowerPoint), the model answers the first question correctly but does not retain the memory: it gives no instructions on installing PowerPoint and says it needs further clarification. The only way to get a correct answer is to rephrase the question to resemble the previous one ("How to install PowerPoint?").
I would like to know whether these problems are related solely to the memory, or whether there is something wrong in my code.
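For context on that question: ConversationBufferMemory itself simply stores every exchange verbatim and replays the full transcript into the next prompt. A toy re-implementation in plain Python (not the LangChain class; all names and sample strings here are illustrative) shows what a follow-up like "and with PowerPoint?" should look like by the time it reaches the model:

```python
class ToyBufferMemory:
    """Minimal stand-in for a conversation buffer memory: it appends every
    exchange and replays all of them into the next prompt."""

    def __init__(self):
        self.turns = []

    def save_context(self, question, answer):
        self.turns.append(("Human", question))
        self.turns.append(("AI", answer))

    def load(self):
        return "\n".join(f"{role}: {text}" for role, text in self.turns)


memory = ToyBufferMemory()
memory.save_context(
    "How can I install Microsoft Word?",
    "Download it from your Microsoft 365 account and run the installer.",
)
# The follow-up question arrives with the whole earlier exchange attached.
prompt = memory.load() + "\nHuman: and with PowerPoint?"
print(prompt)
```

Note that in ConversationalRetrievalChain the follow-up is first condensed, together with the chat history, into a standalone question before retrieval; if that rewriting step loses the topic, the symptoms described above can appear even though the buffer itself stored every turn.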