Hi,
I have realized that the chat history is not included in the prompt. The logic for this is implemented, but the model is not aware of previous interactions (see the screenshot attached).
```typescript
// Ask a question
const response = await chain.call({
  question: sanitizedQuestion,
  chat_history: history || [],
});
```
I feel that this is a bug or weird behaviour of LangChain, but I'm not sure. What do you think? Can you spot a bug? Is there any solution?
Console.log on the frontend or API route to verify the chat history isn't being passed on.
I already did that. It is being passed on from the frontend, but the bot is not aware of it, so it seems the model is not actually receiving it.
That's why I think it may be something related to LangChain's internal workings.
I'm having the same issue: after the first message, the chatbot doesn't pick up the conversation context. However, when working directly with ChatGPT, some features and information are forgotten too.
I am having the same issue.
The reason is how the prompts are treated internally by LangChain, specifically the question-generator prompt. If anyone wants a deeper explanation, please let me know.
@alberduris I understand the LangChain process, but I want to modify it to use SystemChatMessage, HumanChatMessage, and AIChatMessage as shown in this article: https://js.langchain.com/docs/modules/models/chat/additional_functionality How can I do that?
What's the problem? Just pass a list of messages as the chat history. For example:

```typescript
import { SystemChatMessage, HumanChatMessage } from "langchain/schema";

const chatHistory = [
  new SystemChatMessage(
    "You are a helpful assistant that translates English to French."
  ),
  new HumanChatMessage("Translate: I love programming."),
];
```
Check which LangChain class you are using, because you may need to use the text property: `new HumanChatMessage("Translate: I love programming.").text`
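Depending on the chain and LangChain version, `chat_history` may expect either a single string or an array of message objects. A minimal sketch of the call, assuming it accepts messages (the example turn is made up for illustration):

```typescript
import { HumanChatMessage, AIChatMessage } from "langchain/schema";

// Hypothetical previous turn, purely for illustration.
const history = [
  new HumanChatMessage("What is upward management?"),
  new AIChatMessage(
    "Upward management is a skill to manage your boss's expectations."
  ),
];

const response = await chain.call({
  question: sanitizedQuestion,
  chat_history: history,
});
```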
@alberduris Even if I put it in chat_history, it ends up being condensed into a single question, so I don't think the context is being understood correctly. Is there a solution? I wonder if I should use a different chain altogether. What I want to do is use it with a vector store, but I don't know how.
```typescript
const response = await chain.call({
  question: sanitizedQuestion,
  chat_history: last(splitHistory) || [],
});
```
If you try a different chain, you may get it working. You can also create custom prompts with LangChain's PromptTemplate class.
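For example, a custom QA prompt might be built like this (a minimal sketch; the template wording is just an illustration, while `{context}` and `{question}` are the variables the chain fills in):

```typescript
import { PromptTemplate } from "langchain/prompts";

// Example custom QA prompt: {context} receives the retrieved documents,
// {question} receives the (condensed) user question.
const qaPrompt = PromptTemplate.fromTemplate(
  `Use the following pieces of context to answer the question at the end.
If you don't know the answer, just say that you don't know.

{context}

Question: {question}
Helpful answer:`
);
```

In this repo, the equivalent template string is what gets passed as `qaTemplate` to `ConversationalRetrievalQAChain.fromLLM`.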
Here is what `chat_history` actually means.
If you pass this input:
`{ "question": "why important?",
"history": [ "Upward management"
] }`
The answer will be "Upward management is important, because you want to earn trust from your boss....."
Instead, if you pass this:
`{ "question": "what is it?",
"history": [ "Upward management"
] }`
The answer will be "Upward management is a skill to manage your boss's expectation....."
As you can see, the history is actually combined with the current question to form a standalone question, which is then fed to the LLM to get an answer. Please note that the chat history is NOT used as context! The context is the set of documents you retrieved from the vector store.
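For reference, the default question-generator prompt looks roughly like this (paraphrased; the exact wording depends on your LangChain version):

```text
Given the following conversation and a follow up question, rephrase the
follow up question to be a standalone question.

Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:
```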
Hope it helps
Hello,
Sorry for spamming, but is there a solution to this? I need the bot to retrieve content from the document but also keep the history so that conversations are connected. I need the context.
PLEASE HELP!
Having the same issue with Python as well.
Facing the same issue as well. Is there a different chain that works with vector stores, memory, and prompts?
+1
Facing the same issue.
Same issue. LangChain behaves weirdly when using combinations of QARetrieval and Memory together. When logging verbose, it sometimes doesn't add the context and therefore loses track of what it just spoke about.
When using both together, sometimes when running a chain to rephrase a chat history into a single question, it does the rephrasing, goes to the next step, answers itself, and then comes back saying "I don't know" (mainly with gpt-3.5-turbo).
It's a little bit buggy.
I'm trying to do the chains manually now. Bonus points that I at least understand more thoroughly what it does :)
The ticket should be reopened.
Made a new ticket: https://github.com/mayooear/gpt4-pdf-chatbot-langchain/issues/393
Hello everyone. Could someone improve the code to chat with history?
I am using `ConversationalRetrievalQAChain` as in the code of this repo, in `makechain.ts`. Everything works fine for questions and answers about the document, but if you ask a new question about the previous question or answer, you get the message that no information was found in the context.
I have realized that the history is saved, but only as an output, so that the web page always shows the previous questions and answers (it is only used to render them). There is no memory that allows the bot to remember the previous context.
Following the LangChain documentation, I have added BufferMemory, but have gotten no results when asking about the previous context.
```typescript
const chain = ConversationalRetrievalQAChain.fromLLM(
  model,
  vectorstore.asRetriever(),
  {
    verbose: true,
    qaTemplate: QA_PROMPT,
    questionGeneratorTemplate: CONDENSE_PROMPT,
    returnSourceDocuments: true,
    memory: new BufferMemory({
      memoryKey: "chat_history",
      inputKey: "question",
      outputKey: "text",
      returnMessages: true,
    }),
  },
);
```
I really appreciate any help or any ideas on how to proceed.
Did you ever find a solution? I am still having this problem
Hi, after seeing other Q&As on GitHub and reading some of the LangChain documentation, I solved this by adding a few lines of code, like this (although it's not finished at all).
Export `history` from `chat.ts` and then import it in `makechain.ts`:
```typescript
import { BufferWindowMemory, ChatMessageHistory } from "langchain/memory";

const messageHistory = new ChatMessageHistory(history);

const chain = ConversationalRetrievalQAChain.fromLLM(
  model,
  vectorstore.asRetriever(),
  {
    verbose: true,
    qaTemplate: QA_PROMPT,
    questionGeneratorTemplate: CONDENSE_PROMPT,
    returnSourceDocuments: true, // the number of source documents returned is 4 by default
    memory: new BufferWindowMemory({
      k: 1, // keep only the most recent exchange in the window
      memoryKey: "chat_history",
      inputKey: "question",
      outputKey: "text",
      returnMessages: true,
      chatHistory: messageHistory,
    }),
  },
);
```
Hope this solves your problem with chat history. You may then need to limit the amount of chat history you send, due to OpenAI token pricing.
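One caveat: `ChatMessageHistory` expects an array of message objects. If the `history` you export from `chat.ts` is a list of [question, answer] string pairs (as in this repo's frontend), you may need to convert it first. A sketch, assuming that shape:

```typescript
import { ChatMessageHistory } from "langchain/memory";
import { HumanChatMessage, AIChatMessage } from "langchain/schema";

// Convert [question, answer] string pairs into the message objects
// that ChatMessageHistory expects.
const pastMessages = (history as [string, string][]).flatMap(
  ([question, answer]) => [
    new HumanChatMessage(question),
    new AIChatMessage(answer),
  ]
);

const messageHistory = new ChatMessageHistory(pastMessages);
```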