Closed: Rafcin closed this issue 1 year ago
I'm trying to make this bot work like GPT would, where I can ask it questions or hold a conversation with it, but I want to add tools and vectorstore to its context so it can better answer questions. Here is the setup I have now; however, it doesn't remember anything I tell it.
import {
  ConversationalRetrievalQAChain,
  VectorDBQAChain,
} from "langchain/chains";
import { ChainTool } from "langchain/tools";
import { VectorStore } from "langchain/vectorstores";
import { openai } from "../../openai";

export const webVectorChain = (vectorstore: VectorStore) => {
  const vectorchain = VectorDBQAChain.fromLLM(openai, vectorstore);
  const chain = new ChainTool({
    name: "vector-chain",
    description:
      "QA chain that uses a vector store to retrieve documents and then uses OpenAI to answer questions. This chain includes web access and other tools.",
    chain: vectorchain,
  });
  return chain;
};
import { initializeAgentExecutor } from "langchain/agents";
import { Calculator, SerpAPI } from "langchain/tools";
import { VectorStore } from "langchain/vectorstores";
import { openai } from "../openai";
import { webVectorChain } from "./webvectorchain";

export const makeChain = async (vectorstore: VectorStore) => {
  const tools = [new Calculator(), webVectorChain(vectorstore)];
  const executor = await initializeAgentExecutor(
    tools,
    openai,
    "zero-shot-react-description"
  );
  console.log("Loaded agent.");
  return executor;
};
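For reference, this is roughly how the executor gets invoked (a minimal sketch; the example input is hypothetical). As far as I can tell, the zero-shot-react-description agent's prompt has no chat_history slot, so it only reads the input key and any history passed alongside it is effectively ignored:

// Minimal usage sketch for the executor built above (hypothetical input).
const executor = await makeChain(vectorStore);

// The agent executor expects an `input` key and returns an `output` key;
// a zero-shot agent has no memory, so no chat history is consumed here.
const result = await executor.call({ input: "What is 2 to the power of 8?" });
console.log(result.output);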
Next.js API route:
import { makeChain } from "@/utils/langchain/chains";
import { supabaseClient } from "@/utils/langchain/supabase";
import { ChatMessageHistory } from "langchain/memory";
import { OpenAIEmbeddings } from "langchain/embeddings";
import { SupabaseVectorStore } from "langchain/vectorstores";
import type { NextApiRequest, NextApiResponse } from "next";

export default async function handler(
  req: NextApiRequest,
  res: NextApiResponse
) {
  const { question, history } = req.body;

  if (!question) {
    return res.status(400).json({ message: "No question in the request" });
  }

  // OpenAI recommends replacing newlines with spaces for best results
  const sanitizedQuestion = question.trim().replaceAll("\n", " ");

  /* create vectorstore */
  const vectorStore = await SupabaseVectorStore.fromExistingIndex(
    new OpenAIEmbeddings(),
    {
      client: supabaseClient,
    }
  );

  const chain = await makeChain(vectorStore);

  try {
    // Ask a question
    console.log("History", history);
    const response = await chain.call({
      input: sanitizedQuestion,
      question: sanitizedQuestion,
      chat_history: new ChatMessageHistory(history || []),
    });
    console.log("response", response);

    // send the response back to the client
    res.status(200).send(JSON.stringify(response));
  } catch (error) {
    console.log("error", error);
    // send a generic error response to the client
    res.status(500).json({ message: "An error occurred" });
  }
}
The chat history is an array of messages (wrapped in a ChatMessageHistory), and each message is either an AIChatMessage or a HumanChatMessage.
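One possible gotcha (just an assumption about the setup here): the history arriving through req.body is plain JSON, so it likely needs to be rebuilt into message instances before being wrapped in ChatMessageHistory. A rough sketch, assuming an older langchain version where AIChatMessage and HumanChatMessage are exported from langchain/schema, and a hypothetical serialized shape:

import { ChatMessageHistory } from "langchain/memory";
import { AIChatMessage, HumanChatMessage } from "langchain/schema";

// Hypothetical shape of the history the client sends in the request body.
type SerializedMessage = { role: "ai" | "human"; text: string };

// Rebuild proper message instances from the serialized history.
const buildHistory = (history: SerializedMessage[] = []) =>
  new ChatMessageHistory(
    history.map((m) =>
      m.role === "ai" ? new AIChatMessage(m.text) : new HumanChatMessage(m.text)
    )
  );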
It seems like we are unable to add a memory to the ChatVectorDBQAChain even though it extends from BaseChain. Anyone have this working? I assume there would be some issues with fitting the vector content into the context window, which might require summarization or another technique.
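For what it's worth, ChatVectorDBQAChain has been superseded by ConversationalRetrievalQAChain, and in recent langchain versions fromLLM appears to accept a memory option, so something along these lines might work (a rough sketch, assuming a recent version and the model/vectorStore from the snippets above):

import { ConversationalRetrievalQAChain } from "langchain/chains";
import { BufferMemory } from "langchain/memory";

// Sketch: attach a BufferMemory directly to the retrieval chain.
const chain = ConversationalRetrievalQAChain.fromLLM(
  model,
  vectorStore.asRetriever(),
  {
    memory: new BufferMemory({
      memoryKey: "chat_history", // must match the key the chain's prompt expects
      returnMessages: true,
    }),
  }
);

// With memory attached, only the question is passed; the chain tracks the history itself.
const res = await chain.call({ question: "hi, i'm bob" });
console.log(res.text);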
@pirtlj Take a look at this repo, which uses LangChain to read PDF files into a Pinecone index: https://github.com/mayooear/gpt4-pdf-chatbot-langchain
So I've pivoted my approach. I've followed the examples for initializeAgentExecutor with chat-conversational-react-description, and that part seems fine: Calculator and SerpAPI work. However, it still can't hold a history properly, and I don't know why. I tested "hi i'm bob" followed by "what's my name" and it had no idea what to do.
import { initializeAgentExecutor } from "langchain/agents";
import { BufferMemory, ChatMessageHistory } from "langchain/memory";

const executor = await initializeAgentExecutor(
  tools,
  model,
  "chat-conversational-react-description",
  true
);

executor.memory = new BufferMemory({
  returnMessages: true,
  memoryKey: "chat_history",
  inputKey: "input",
  chatHistory: new ChatMessageHistory(history),
});
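If it helps, later langchain releases expose initializeAgentExecutorWithOptions, which lets the memory be supplied at construction time instead of assigning executor.memory afterwards. A sketch, assuming one of those versions and the tools/model/history from the snippet above:

import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { BufferMemory, ChatMessageHistory } from "langchain/memory";

// Sketch: pass the memory in via options rather than mutating the executor.
const executor = await initializeAgentExecutorWithOptions(tools, model, {
  agentType: "chat-conversational-react-description",
  verbose: true,
  memory: new BufferMemory({
    returnMessages: true,
    memoryKey: "chat_history",
    inputKey: "input",
    outputKey: "output",
    chatHistory: new ChatMessageHistory(history),
  }),
});

const result = await executor.call({ input: "hi i'm bob" });
console.log(result.output); // follow-up calls on this executor share the same BufferMemory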
Also, a question: does the agent executor reword my input query? My prompt referred to the current year, but then a rewritten query was logged to the console that said something else and changed the year to 2021.
@Rafcin, did you manage to get this (AgentExecutor with VectorStore and memory) working at all? I'm stuck on this exact issue as well. It seems like a glaring oversight somehow.
Any information is greatly appreciated.
@LiveBacteria
I didn't get it to work; I ended up joining the alpha program for GPT plugins, which solved my issue 😅. However, I still have the code, so I can look into the problem this weekend if you'd like! I think it boiled down to some issues with LangChain itself.
I opened an issue about this as well; however, I'm now getting an error where the way it's being called somehow attempts to redefine its run() function.
If you've made any progress, please do share! I've been stuck for days over here, haha.
How did the alpha assist your project? Just curious.
Hi, @Rafcin! I'm here to help the LangChain team manage their backlog and I wanted to let you know that we are marking this issue as stale.
Based on my understanding, you are asking for help integrating agent tools like a calculator and search into the ConversationalRetrievalQAChain. You provided a version of the chain you made, but it is not working properly. Additionally, you mentioned that you are trying to create a chat bot that can use a document store and pull context from the web to answer questions.
In the comments, you mentioned that you are struggling to make the bot work like GPT and remember previous conversations. Another user suggested looking at a repository that uses LangChain to read PDF files into a Pinecone index. You also tried a different approach using chat-conversational-react-description, but still had issues with holding a history. Another user asked if you managed to get the AgentExecutor with VectorStore and Memory to work, and you mentioned joining the alpha program for GPT plugins. The conversation continued with you and another user discussing your progress and the issues you are facing.
Before we proceed, we would like to confirm if this issue is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on this issue. Otherwise, feel free to close the issue yourself or it will be automatically closed in 7 days.
Thank you for your understanding and we look forward to hearing from you soon!
Hi, I've been playing around with Langchain and GPT-4, building some chat tools, and I was wondering how I can integrate agent tools like calculator and search into ConversationalRetrievalQAChain. The documentation has been quite confusing, and I've been attempting to decipher many of the classes. Here is a version of ConversationalRetrievalQAChain I made, but it doesn't work properly; it constantly returns, "I don't know how to do that". If anyone has some insights into this, I would highly appreciate it! I'm trying to make a chat bot that can look at our document store as well as pull context from the web if needed to answer questions.