langchain-ai / langchainjs


Issues with ConversationalRetrievalQA chain #1697

Closed EmilioJD closed 1 year ago

EmilioJD commented 1 year ago

Hello, I'm working on implementing a website using the ConversationalRetrievalQAChain but keep running into errors with it. It works for retrieving documents from the database (I am using Supabase for the VectorStore), but it doesn't seem to load the chat history: it can't reference earlier parts of the conversation, even though I am successfully passing a chat history into BufferMemory. I was able to use the standard ConversationChain with no problem. I wanted to check whether I am approaching this incorrectly, but I am starting to believe it's a ConversationalRetrievalQAChain issue. Code snippet below; let me know if you would like more context:

```ts
// Imports assumed for this snippet (entrypoints from the classic `langchain` package)
import { ConversationalRetrievalQAChain } from "langchain/chains";
import { BufferMemory } from "langchain/memory";

const memory = new BufferMemory({
  memoryKey: "chat_history", // Must be set to "chat_history"
  chatHistory: chatHistory,
  returnMessages: true,
  inputKey: "question", // The key for the input to the chain
  outputKey: "text", // The key for the final conversational output of the chain
});

const chain = ConversationalRetrievalQAChain.fromLLM(
  model,
  vectorStore.asRetriever(),
  {
    memory: memory,
  },
);

let question = userQuery;
const response = await chain.call({ question });
```
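For reference, a minimal sketch of one way the `chatHistory` object above could be built. The import paths and message class names assume a recent version of the `langchain` package (older releases used `HumanChatMessage`/`AIChatMessage`), and the example turns are made up:

```ts
import { ChatMessageHistory } from "langchain/memory";
import { HumanMessage, AIMessage } from "langchain/schema";

// Hypothetical earlier turns, only to illustrate the expected shape
const chatHistory = new ChatMessageHistory([
  new HumanMessage("What documents does this site search over?"),
  new AIMessage("It searches the documents stored in your Supabase table."),
]);
```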

anthonycoded commented 1 year ago

Did you get a TypeScript error when adding the memory property? I'm running into some weird issues with ConversationalRetrievalQAChain as well.

code:

```ts
const chain = ConversationalRetrievalQAChain.fromLLM(model, retriever, {
  memory: new BufferMemory({
    memoryKey: "chat_history", // Must be set to "chat_history"
  }),
});
```

Error:

```
Argument of type '{ memory: BufferMemory; }' is not assignable to parameter of type 'Partial<Omit<RetrievalQAChainInput, "combineDocumentsChain" | "index">> & StuffQAChainParams'.
```
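If the installed version predates built-in memory on this chain, one possible workaround is a sketch based on the older documented calling convention, where the history is passed in explicitly rather than via `memory`:

```ts
// Sketch: construct the chain without the memory option and pass the prior
// turns explicitly on each call. In the older docs, `chat_history` was a
// plain string of previous exchanges; adjust to what your version expects.
const chain = ConversationalRetrievalQAChain.fromLLM(model, retriever);

const chatHistory = [
  "Human: What is in the knowledge base?",
  "Assistant: Documents stored in the Supabase vector table.",
].join("\n");

const res = await chain.call({
  question: userQuery,
  chat_history: chatHistory,
});
```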

brianyun commented 1 year ago

I am having this issue as well. I implemented memory in ConversationalRetrievalQA as shown in https://js.langchain.com/docs/modules/chains/index_related_chains/conversational_retrieval, but my chat interface shows no awareness of the memory.

```ts
const chain = ConversationalRetrievalQAChain.fromLLM(
  streamingModel,
  vectorstore.asRetriever(),
  {
    qaTemplate: QA_PROMPT,
    questionGeneratorTemplate: CONDENSE_PROMPT,
    returnSourceDocuments: false,
    memory: new BufferMemory({
      memoryKey: 'chat_history',
    }),
  }
);
```
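One thing worth double-checking when overriding `questionGeneratorTemplate`: that condense prompt is where the chat history is actually injected, so a custom template that drops the `{chat_history}` or `{question}` placeholders will quietly ignore the history. A sketch of a condense prompt that keeps both (the wording is illustrative, loosely following the library's default):

```ts
const CONDENSE_PROMPT = `Given the following conversation and a follow up question,
rephrase the follow up question to be a standalone question.

Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:`;
```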

EmilioJD commented 1 year ago

> Did you get a typescript error when adding the memory property? I'm running into some weird issues with ConversationalRetrievalQAChain as well.
>
> code: `const chain = ConversationalRetrievalQAChain.fromLLM(model, retriever, { memory: new BufferMemory({ memoryKey: "chat_history" }) });`
>
> Error: `Argument of type '{ memory: BufferMemory; }' is not assignable to parameter of type 'Partial<Omit<RetrievalQAChainInput, "combineDocumentsChain" | "index">> & StuffQAChainParams'.`

I did not receive this error. Perhaps make sure you are on the latest version of langchain, since memory was only recently introduced to this chain?

> I am having this issue as well. I implemented my memory in ConversationalRetrievalQA like provided in https://js.langchain.com/docs/modules/chains/index_related_chains/conversational_retrieval, but my chat interface shows no awareness of the memory.
>
> `const chain = ConversationalRetrievalQAChain.fromLLM(streamingModel, vectorstore.asRetriever(), { qaTemplate: QA_PROMPT, questionGeneratorTemplate: CONDENSE_PROMPT, returnSourceDocuments: false, memory: new BufferMemory({ memoryKey: 'chat_history' }) });`

On that note, the way I fixed my problem was by looking at the PR in which built-in memory was introduced to the ConversationalRetrievalQAChain and realizing that its mock dataset is much richer than the one given in the Supabase Vector Store docs:

[ "Mitochondria are the powerhouse of the cell", "Foo is red", "Bar is red", "Buildings are made out of brick", "Mitochondria are made of lipids", ], [{ id: 2 }, { id: 1 }, { id: 3 }, { id: 4 }, { id: 5 }], vs

["Hello world", "Bye bye", "What's this?"], [{ id: 2 }, { id: 1 }, { id: 3 }],

Maybe trying a more salient dataset is necessary for the RetrievalQA chain to work well. Best of luck.