langchain-ai / langchainjs


ConversationalRetrievalQAChain Error #3604

Closed · canberkkoc1 closed 9 months ago

canberkkoc1 commented 9 months ago

I am getting this error in my YouTube chatbot:

[chain/error] [1:chain:ConversationalRetrievalQAChain] [1ms] Chain run error: "chatMessage._getType is not a function"

import { ChatOpenAI } from "langchain/chat_models/openai";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { HNSWLib } from "langchain/vectorstores/hnswlib";
import { ConversationalRetrievalQAChain } from "langchain/chains";
import { YoutubeTranscript } from "youtube-transcript";
import { NextResponse } from "next/server";

let chain;
let chatHistory = [];

const initializeChain = async (initialPrompt, transcriptString) => {
  try {
    const model = new ChatOpenAI({
      // Note: "text-davinci-003" is a legacy completion model;
      // ChatOpenAI expects a chat model such as "gpt-3.5-turbo".
      modelName: "text-davinci-003",
      temperature: 0.8,
    });

    const vectorStore = await HNSWLib.fromDocuments(
      [
        {
          pageContent: transcriptString,
        },
      ],
      new OpenAIEmbeddings()
    );

    chain = ConversationalRetrievalQAChain.fromLLM(
      model,
      vectorStore.asRetriever(),
      {
        verbose: true,
      }
    );

    // chat_history here is an array of plain objects; the chain calls
    // _getType() on each history entry, which is where the error is thrown.
    const response = await chain.call({
      question: initialPrompt,
      chat_history: chatHistory,
    });

    // A plain object, not an AIMessage instance:
    chatHistory.push({
      role: "assistant",
      content: response.text,
    });

    return response;
  } catch (error) {
    console.log(error);
  }
};

export async function POST(req, res) {
  const data = await req.json();
  const { prompt, firstMsg } = data;

  if (firstMsg) {
    try {
      const initialPrompt = `Give me a summary of the transcript ${prompt}`;

      chatHistory.push({
        role: "user",
        content: initialPrompt,
      });

      const transcriptResponse = await YoutubeTranscript.fetchTranscript(
        prompt
      );

      if (!transcriptResponse) {
        return NextResponse.json(
          { error: "No transcript found" },
          { status: 400 }
        );
      }

      let transcriptString = "";

      transcriptResponse.forEach((item) => {
        transcriptString += item.text + " ";
      });

      const response = await initializeChain(initialPrompt, transcriptString);

      return NextResponse.json(
        { output: response, chatHistory },
        { status: 200 }
      );
    } catch (error) {
      console.error(error);
      return NextResponse.json({ error: error.message }, { status: 500 });
    }
  } 
}
dosubot[bot] commented 9 months ago

🤖

Hello @canberkkoc1! 👋

I'm Dosu, a friendly bot designed to assist you in navigating through our codebase, debugging issues, and becoming a contributor. While we're waiting for a human maintainer to join us, I'm here to help you out. Looking forward to our coding adventure together!

Based on the error message you're encountering, it seems like there's an issue with the chatMessage._getType function. This function is a method defined in the ChatMessage class in the langchainjs codebase. It is used to get the type of the message. For the ChatMessage class, this method returns the string "generic".

Here is the definition of the _getType function in the ChatMessage class:

/**
 * Represents a chat message in a conversation.
 */
export class ChatMessage
  extends BaseMessage
  implements ChatMessageFieldsWithRole
{
  static lc_name() {
    return "ChatMessage";
  }

  role: string;

  constructor(content: string, role: string);

  constructor(fields: ChatMessageFieldsWithRole);

  constructor(fields: string | ChatMessageFieldsWithRole, role?: string) {
    if (typeof fields === "string") {
      // eslint-disable-next-line no-param-reassign, @typescript-eslint/no-non-null-assertion
      fields = { content: fields, role: role! };
    }
    super(fields);
    this.role = fields.role;
  }

  _getType(): MessageType {
    return "generic";
  }

  static isInstance(message: BaseMessage): message is ChatMessage {
    return message._getType() === "generic";
  }
}

You can find this in the langchain-core/src/messages/index.ts file.
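
In practice, this error usually means the chat_history you pass contains plain objects instead of BaseMessage subclasses. A minimal sketch (assuming the langchain/schema entry point; the message strings are only illustrative) of history entries that do carry _getType:

import { HumanMessage, AIMessage } from "langchain/schema";

// Each entry is a BaseMessage subclass, so _getType() is defined:
const history = [
  new HumanMessage("Give me a summary of the transcript"),
  new AIMessage("Here is a summary: ..."),
];

console.log(history.map((m) => m._getType())); // ["human", "ai"]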

I found a similar issue in the langchainjs repository: message._getType is not a function. The suggested solution was to include returnMessages: true in the BufferMemory passed to initializeAgentExecutorWithOptions. Here is the corrected code:

memory: history
  ? new BufferMemory({
      returnMessages: true,
      memoryKey: "chat_history",
      inputKey: "input",
      outputKey: "output",
      chatHistory: history,
    })
  : undefined,
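
Applied to your ConversationalRetrievalQAChain rather than an agent executor, the same idea would look roughly like this (a sketch only, assuming the model and vectorStore from your snippet):

import { BufferMemory } from "langchain/memory";

chain = ConversationalRetrievalQAChain.fromLLM(
  model,
  vectorStore.asRetriever(),
  {
    memory: new BufferMemory({
      returnMessages: true,      // history is returned as BaseMessage objects
      memoryKey: "chat_history", // the input key this chain reads history from
    }),
  }
);

// With memory attached, pass only the question; no hand-built chat_history.
const response = await chain.call({ question: initialPrompt });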

Please try this solution and let me know if it resolves your issue. If not, could you provide more details about your implementation? Specifically, how you're initializing and using the ConversationalRetrievalQAChain class. This will help me better understand the context and provide a more accurate solution.

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

canberkkoc1 commented 9 months ago
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { BufferMemory } from "langchain/memory";

const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000 });

const model = new ChatOpenAI({
  modelName: "gpt-3.5-turbo",
  temperature: 0.8,
});

// Split the transcript into chunks before embedding.
const docs = await textSplitter.createDocuments([transcriptString]);

const vectorStore = await HNSWLib.fromDocuments(docs, new OpenAIEmbeddings());

chain = ConversationalRetrievalQAChain.fromLLM(
  model,
  vectorStore.asRetriever(),
  {
    memory: new BufferMemory({
      memoryKey: "chat_history", // Must be set to "chat_history"
    }),
  }
);

// The memory tracks history itself, so only the question is passed in.
const response = await chain.call({
  question: initialPrompt,
});

return response;

This code fixed the issue. Letting BufferMemory manage chat_history means the chain never receives plain { role, content } objects, so _getType is always defined.
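
For later turns the same chain instance can be reused; a minimal sketch (the follow-up question is hypothetical):

// BufferMemory supplies chat_history internally on every call,
// so follow-up turns only pass the new question.
const followUp = await chain.call({
  question: "Can you expand on the main topic in more detail?",
});
console.log(followUp.text);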