langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com
MIT License
94.55k stars · 15.29k forks

Entity memory + ChatVectorDB ? #1876

Closed · portkeys closed this issue 1 year ago

portkeys commented 1 year ago

Love the LangChain library, so obsessed with it lately!

I've been using ChatVectorDBChain, which retrieves answers from a Pinecone vectorstore, and it's been working very well.

But one thing I noticed is that for a normal ConversationChain, you can add a memory argument, which provides a nicer user experience because the chain remembers the entities that were discussed.

Question: can we add a memory argument to ChatVectorDBChain? If it already exists, could you point out whether the code below is the right way to use it?

Thanks again so much!!😊

from langchain.chains import ChatVectorDBChain
from langchain.memory import ConversationEntityMemory

chat_with_sources = ChatVectorDBChain.from_llm(
    llm=llm,
    chain_type="stuff",
    vectorstore=vectorstore,
    return_source_documents=True,
    # memory=ConversationEntityMemory(llm=llm, k=5)
)
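For context on what the requested memory argument would do: a buffer-style memory simply stores past turns and injects the most recent ones back into the prompt on each call. A minimal pure-Python sketch of that idea (no LangChain dependency; the class name `BufferMemory` and its methods are made up for illustration, loosely mirroring LangChain's `save_context` naming):

```python
class BufferMemory:
    """Toy stand-in for a conversation buffer memory: keeps the last k turns."""

    def __init__(self, k=5):
        self.k = k
        self.turns = []  # list of (human, ai) message pairs

    def save_context(self, human, ai):
        # Record one completed exchange.
        self.turns.append((human, ai))

    def load_history(self):
        # Only the most recent k turns get injected into the next prompt.
        recent = self.turns[-self.k:]
        return "\n".join(f"Human: {h}\nAI: {a}" for h, a in recent)


memory = BufferMemory(k=2)
memory.save_context("Who wrote the report?", "Alice wrote it.")
memory.save_context("When?", "In March 2021.")
print(memory.load_history())
```

A retrieval chain with memory would prepend `load_history()` to the question before querying the LLM, which is what gives the "remembers discussed entities" experience described above.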
lorepieri8 commented 1 year ago

Related: https://github.com/hwchase17/langchain/issues/1448

casualcomputer commented 1 year ago

Same question. Thanks!

phiweger commented 1 year ago

Same.

phiweger commented 1 year ago

@portkeys this seems doable using llama_index plus tooling from langchain:

# Imports as used in the llama_index version from the linked notebook.
from langchain.memory import ConversationBufferMemory
from langchain.llms import OpenAI
from llama_index.langchain_helpers.agents import create_llama_chat_agent

memory = ConversationBufferMemory(memory_key="chat_history")
llm = OpenAI(temperature=0)
agent_chain = create_llama_chat_agent(
    toolkit,  # a LlamaToolkit built earlier (see the linked notebook)
    llm,
    memory=memory,
    verbose=True
)

source: https://github.com/jerryjliu/llama_index/blob/main/examples/chatbot/Chatbot_SEC.ipynb
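Since the original question was about ConversationEntityMemory specifically: the core idea there is to extract entities from each turn and keep per-entity notes that get injected back into the prompt. A toy sketch of that mechanism (the real ConversationEntityMemory uses an LLM for extraction; here a naive capitalized-word heuristic stands in, and the class name `ToyEntityMemory` is invented for illustration):

```python
import re


class ToyEntityMemory:
    """Naive entity memory: maps entity names to the sentences mentioning them."""

    def __init__(self):
        self.store = {}  # entity name -> list of sentences

    def save_context(self, text):
        # Crude stand-in for LLM-based entity extraction: capitalized words.
        for entity in set(re.findall(r"\b[A-Z][a-z]+\b", text)):
            self.store.setdefault(entity, []).append(text)

    def load(self, entity):
        # Notes to inject into the prompt when this entity comes up again.
        return self.store.get(entity, [])


mem = ToyEntityMemory()
mem.save_context("Pinecone hosts the vectors for Acme.")
print(mem.load("Pinecone"))
```

This is only a sketch of the bookkeeping; wiring it into a retrieval chain still requires the chain to accept a memory object, which is what this issue was asking for.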

portkeys commented 1 year ago

Thanks for sharing this solution! @phiweger

dosubot[bot] commented 1 year ago

Hi, @portkeys! I'm Dosu, and I'm here to help the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.

From what I understand, you were asking if it is possible to add a memory argument to the ChatVectorDBChain in the Langchain library. You also provided some code and asked if it is the correct way to use it. There have been a few comments from other users discussing related issues and providing a potential solution using llama_index and langchain. You thanked one user for sharing this solution.

Before we close this issue, we wanted to check with you if it is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on the issue. Otherwise, feel free to close the issue yourself or it will be automatically closed in 7 days.

Thank you for your understanding and contribution to the LangChain project! Let us know if you have any further questions or concerns.