langchain-ai / langchainjs

🦜🔗 Build context-aware reasoning applications 🦜🔗
https://js.langchain.com/docs/

Error on AWS Lambda #2536

Closed · ali-habibzadeh closed 11 months ago

ali-habibzadeh commented 11 months ago

Hi,

I am using

"hnswlib-node": "^2.0.0",
"langchain": "^0.0.144",

Everything works locally, but when deployed to Lambda (Node 18) I get:

"Runtime.ImportModuleError: Error: Cannot find module '../stores/doc/in_memory.cjs'",
"Require stack:",
        "- /var/task/node_modules/langchain/dist/vectorstores/hnswlib.cjs",
        "- /var/task/node_modules/langchain/vectorstores/hnswlib.cjs",

Code that produces this error:

import { Injectable } from "@nestjs/common";
import { HNSWLib } from "langchain/vectorstores/hnswlib";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { StuffDocumentsChain, loadQAStuffChain } from "langchain/chains";
import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";
import { qaTemplate } from "./templates/qa-template";
import { resolve } from "path";

@Injectable()
export class ChatService {
  private initialised = false;
  private vectorStore!: HNSWLib;
  private chain!: StuffDocumentsChain;

  private async init() {
    if (!this.initialised) {
      const dir = resolve(__dirname, "/vectors");
      this.vectorStore = await HNSWLib.load(dir, new OpenAIEmbeddings());
      this.initialised = true;
      this.setChain();
    }
  }

  private async setChain() {
    const prompt = PromptTemplate.fromTemplate(qaTemplate);
    this.chain = loadQAStuffChain(new OpenAI({ modelName: "gpt-3.5-turbo-16k" }), { prompt });
  }

  public async query(question: string): Promise<string> {
    await this.init();
    const input_documents = await this.vectorStore.similaritySearch(question, 10);
    const { text } = await this.chain.call({ input_documents, question });
    return text;
  }
}
dosubot[bot] commented 11 months ago

🤖

Hello,

The error message indicates that the AWS Lambda environment cannot find the in_memory.cjs module, which is part of the LangChain framework. This module provides in-memory file storage and is typically used to cache files for quick access or during testing.

The issue might be related to the way AWS Lambda handles file paths. When you deploy a Lambda function, your code is uploaded to an S3 bucket and then extracted to the /var/task directory in the Lambda environment, which can break relative file paths.
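
As a quick way to confirm what actually got extracted, you can probe at runtime for the file the require stack complains about. This is a minimal sketch; the path below is reconstructed from the require stack in the error above, not verified against your bundle:

import { existsSync } from "fs";
import { join } from "path";

// The require stack shows hnswlib.cjs asking for "../stores/doc/in_memory.cjs",
// which should resolve to the path below inside the deployed package.
const expected = join("/var/task", "node_modules/langchain/dist/stores/doc/in_memory.cjs");

// Log once at cold start so the result shows up in CloudWatch.
console.log(`${expected} exists:`, existsSync(expected));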

Here are a few things you could try:

  1. Check your deployment package: Make sure that the in_memory.cjs file is included in your deployment package and located in the correct directory. You can do this by downloading the deployment package from the S3 bucket and inspecting its contents, or by probing for the file at runtime as sketched above.

  2. Use absolute file paths: Instead of relative file paths, try absolute ones. You can use the __dirname global variable to get the directory of the current module, then use the path.join() function to construct the absolute path (see the sketch after this list).

  3. Check your Node.js version: Make sure that you're using a compatible version of Node.js. The LangChain framework is written in TypeScript, which is transpiled to JavaScript and runs on any version of Node.js that supports ES6 syntax, though certain features may be missing from older versions. You mention Node.js 18, which should be compatible, but it may be worth trying a different version to see if that resolves the issue.
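
To illustrate suggestion 2, here is a minimal sketch of the path construction. One thing worth noting in the code above: path.resolve() treats a segment with a leading slash as an absolute path, so resolve(__dirname, "/vectors") evaluates to "/vectors" and discards __dirname entirely, whereas path.join() with a relative segment stays anchored to the module's directory:

import { join, resolve } from "path";

// resolve() resets to the filesystem root when a segment starts with "/":
const wrong = resolve(__dirname, "/vectors"); // => "/vectors"

// join() keeps the path anchored to the current module's directory:
const dir = join(__dirname, "vectors"); // => "<module dir>/vectors"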

If none of these suggestions work, please provide more information about your AWS Lambda environment and how you're deploying your code, and I'll be happy to help you troubleshoot further.

Best, Dosu

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.