langchain-ai / langchainjs

🦜🔗 Build context-aware reasoning applications 🦜🔗
https://js.langchain.com/docs/
MIT License
12.29k stars 2.08k forks

`error unhandledRejection: Error [TypeError]: text.replace is not a function` #4736

Closed DaveOkpare closed 6 months ago

DaveOkpare commented 6 months ago

Checked other resources

Example Code

    import {
      RunnableSequence,
      RunnablePassthrough,
    } from "@langchain/core/runnables";
    import { StringOutputParser } from "@langchain/core/output_parsers";
    import { Configuration, OpenAIApi } from "openai-edge";
    import { Pinecone } from "@pinecone-database/pinecone";
    import { PineconeStore } from "@langchain/pinecone";
    import { OpenAIEmbeddings, ChatOpenAI } from "@langchain/openai";
    import { formatDocumentsAsString } from "langchain/util/document";
    import { PromptTemplate } from "@langchain/core/prompts";
    import { QA_PROMPT } from "../../config/prompts";

    const pinecone = new Pinecone({
      apiKey: PINECONE_API_KEY,
    });

    // Set Pinecone index name
    const pineconeIndex = pinecone.Index(PINECONE_INDEX_NAME);

    // Get the vectorstore
    const vectorStore = await PineconeStore.fromExistingIndex(
      new OpenAIEmbeddings(),
      {
        pineconeIndex: pineconeIndex,
        textKey: "text",
        namespace: PINECONE_NAME_SPACE,
      }
    );

    const retreiver = vectorStore.asRetriever();
    const prompt = PromptTemplate.fromTemplate(QA_PROMPT);
    const llm = new ChatOpenAI({ modelName: "gpt-3.5-turbo", temperature: 0 });

    const ragChain = RunnableSequence.from([
          {
            context: retreiver.pipe(formatDocumentsAsString),
            question: new RunnablePassthrough(),
          },
          prompt,
          llm,
          new StringOutputParser(),
        ]);

    const response = await ragChain.invoke({question});

Error Message and Stack Trace (if applicable)

Description

I'm trying to do RAG using LCEL but keep running into this error using all the code examples from the docs. My goal is to also return the source documents and stream.

System Info

version: '0.1.27', type: 'module', engines: { node: '>=18' }, main: './index.js', types: './index.d.ts'

dosubot[bot] commented 6 months ago

Hey @DaveOkpare, great to see you around here again! How have you been?

Based on the error message you provided, it seems like the issue is related to the text.replace function being called on a non-string variable. This error is likely not directly related to the LangChain JS code itself but could be a result of how the responses or inputs are handled, especially when interfacing with external libraries or APIs.

Given the code snippet, there are no explicit calls to text.replace within the provided code. Therefore, the error might be occurring within one of the libraries or methods being used, possibly within the RunnableLambda, StringOutputParser, or even deeper within the LangChain or OpenAI libraries when they process text inputs or outputs.

To troubleshoot this issue, consider the following steps:

  1. Verify Input Types: Ensure that all inputs to functions expected to operate on strings are indeed strings. This is particularly relevant for the input to the RunnableLambda function and the output of retriever.invoke(input). You might want to explicitly convert or check the type of these inputs/outputs to ensure they are strings.

  2. Check Library Methods: Review the documentation or source code of the RunnableLambda, StringOutputParser, and any other library method you're using to see if there's an internal use of text.replace that could be causing this issue when given an unexpected input type.

  3. Debugging: Add console logs before each step in your chain to log the types and values of variables being passed around. This can help identify where the type mismatch occurs.

  4. Review External Changes: If this code worked previously, consider if there have been any updates or changes to the LangChain JS library or any other dependencies that might have introduced this issue.

Without more specific details on where the text.replace method is being called from within the libraries or your code, these general troubleshooting steps are the best approach to identifying and resolving the error.
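The logging suggestion in step 3 can be sketched with a small `tap` helper (a hypothetical name, not a LangChain API); in a real chain you could wrap it with `RunnableLambda.from` and pipe it between steps:

```typescript
// Hypothetical helper (not a LangChain API): log the runtime type of a value
// at a chain step, then return it unchanged so the chain is unaffected.
function tap<T>(value: T, label: string): T {
  console.log(`[${label}] typeof=${typeof value}`);
  return value;
}

// Example: a non-string slipping into a string-only step is easy to spot.
const goodInput = tap("What is LCEL?", "before prompt");
const badInput = tap({ question: "What is LCEL?" }, "before prompt");
// typeof badInput is "object" -- calling .replace on it would throw the
// TypeError from the issue title.
```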

Regarding your question about the RunnablePassthrough() method, it serves as a utility for passing inputs through a processing chain without altering them, or optionally, adding additional keys to the output if the input is an object. It behaves similarly to the identity function but is designed to work within the context of LangChain's runnable system, allowing for inputs to be passed through or modified in a controlled manner. This is particularly useful in scenarios where the input needs to be enriched or passed along as-is to subsequent steps in a processing chain.
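In plain TypeScript terms, that identity behavior can be modeled like this (illustrative names only, not the library's implementation):

```typescript
// Plain-TypeScript model of RunnablePassthrough: the identity function.
const passthrough = <T>(input: T): T => input;

// In the object step of the chain, every value is invoked with the SAME
// top-level input. So with ragChain.invoke("some question"):
//   context  <- retriever output, formatted as a string
//   question <- passthrough("some question"), i.e. the question unchanged
const question = "What is LCEL?";
const mapped = {
  context: "<retrieved docs formatted as a string>",
  question: passthrough(question),
};
```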

I hope this helps! If you have any more questions or need further clarification, feel free to ask.

Sources

#### About Dosu

This response is meant to be useful and save you time. It is **not meant to be a precise solution**, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
jacoblee93 commented 6 months ago

The input to ragChain in your case will be passed directly to retreiver.pipe(formatDocumentsAsString) and new RunnablePassthrough(). Retrievers take string input, so you should invoke your chain like this:

    const ragChain = RunnableSequence.from([
          {
            context: retreiver.pipe(formatDocumentsAsString),
            question: new RunnablePassthrough(),
          },
          prompt,
          llm,
          new StringOutputParser(),
        ]);

  // No brackets around question
    const response = ragChain.invoke(question);

Alternatively, you could update your chain to look like this:

    const ragChain = RunnableSequence.from([
          RunnableLambda.from((input) => input.question),
          {
            context: retreiver.pipe(formatDocumentsAsString),
            question: new RunnablePassthrough(),
          },
          prompt,
          llm,
          new StringOutputParser(),
        ]);

  // Now you can put it in an object
    const response = ragChain.invoke({ question });
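The original question also mentioned returning source documents. The object-literal step in a chain fans the same input out to several runnables and collects the results under keys, and the same pattern can return the documents alongside the answer. A plain-TypeScript model of that fan-out (all names here are illustrative stand-ins, not LangChain APIs):

```typescript
// Model of the fan-out an object literal performs inside a runnable sequence:
// each value runs against the same input, and the results are collected
// under the corresponding keys.
function fanOut<I>(
  input: I,
  steps: Record<string, (input: I) => unknown>
): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(steps).map(([key, fn]) => [key, fn(input)])
  );
}

// Hypothetical stand-ins for the retriever and the answering chain:
const retrieve = (q: string) => [{ pageContent: `docs for: ${q}` }];
const answer = (q: string) => `answer to: ${q}`;

const result = fanOut("What is LCEL?", {
  sourceDocuments: retrieve,
  answer: answer,
});
// result.answer holds the answer, result.sourceDocuments the raw documents.
```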
fromatlantis commented 5 months ago

> (quoting @jacoblee93's answer above)

How do I add chat_history memory? Thanks.

emmadao commented 3 days ago

Hi, can someone help me add chat history with Postgres? Here is my code:

    const retriever = vectorstore.asRetriever();

    const chain = RunnableSequence.from([
      RunnableLambda.from((input) => input.question),
      {
        context: retriever.pipe((docs) => docs[0].pageContent),
        question: new RunnablePassthrough(),
        chat_history: new RunnablePassthrough(),
      },
      promptTemplate2,
      llm,
      new StringOutputParser(),
    ]);

    const chainWithMessageHistory = new RunnableWithMessageHistory({
      runnable: chain,
      getMessageHistory: async (sessionId) => {
        const chatHistory = new PostgresChatMessageHistory({
          sessionId,
          pool,
        });
        return chatHistory;
      },
      inputMessagesKey: "question",
      historyMessagesKey: "chat_history",
    });

    const chainResult = await chainWithMessageHistory.invoke(
      { question: question },
      { configurable: { sessionId: "langchain-test-1" } }
    );

But the chat_history doesn't work.
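A likely cause (an educated guess from the code above, not a verified fix): RunnableWithMessageHistory injects the history into the input object under historyMessagesKey, but the leading RunnableLambda.from((input) => input.question) discards everything except the question string before the object step runs, so the chat_history passthrough never sees the injected history. A plain-TypeScript model of that routing:

```typescript
// Model of the input routing in the chain above.
// RunnableWithMessageHistory injects chat_history into the input object:
const injected = {
  question: "What did I ask before?",
  chat_history: ["earlier message"],
};

// The leading RunnableLambda.from((input) => input.question) replaces the
// whole object with just the question string...
const afterLambda = injected.question;

// ...so the later { chat_history: new RunnablePassthrough() } entry can only
// pass that string through; the injected history is already gone:
const broken = {
  question: afterLambda,
  chat_history: afterLambda, // the question string, NOT the history
};

// Dropping the leading lambda and picking fields explicitly keeps both:
const fixed = {
  question: injected.question,
  chat_history: injected.chat_history,
};
```

If that is the cause, removing the leading lambda and picking fields explicitly inside the object step, e.g. `question: (input) => input.question` and `chat_history: (input) => input.chat_history`, should let the history reach the prompt.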