withcatai / node-llama-cpp

Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level.
https://node-llama-cpp.withcat.ai
MIT License

node-llama-cpp not compatible with "@langchain/core": "0.3.13" #367

Closed · PeterTucker closed this 1 month ago

PeterTucker commented 1 month ago

Issue description

TypeError: Cannot destructure property '_llama' of 'undefined' as it is undefined.

Expected Behavior

Load Model.

Actual Behavior

Throws error:

TypeError: Cannot destructure property '_llama' of 'undefined' as it is undefined.
    at new LlamaModel (file:///C:/Users/User/Project/langchain-test/node_modules/node-llama-cpp/dist/evaluator/LlamaModel/LlamaModel.js:42:144)
    at createLlamaModel (file:///C:/Users/User/Project/langchain-test/node_modules/@langchain/community/dist/utils/llama_cpp.js:13:12)
    at new LlamaCpp (file:///C:/Users/User/Project/langchain-test/node_modules/@langchain/community/dist/llms/llama_cpp.js:87:23)
    at file:///C:/Users/User/Project/langchain-test/src/server.js:15:17

Steps to reproduce

    import { LlamaCpp } from "@langchain/community/llms/llama_cpp";
    import fs from "fs";

    const llamaPath = "../project/data/llm-models/Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf";

    const question = "Where do Llamas come from?";

    if (fs.existsSync(llamaPath)) {
      console.log(`Model found at ${llamaPath}`);

      // Throws as soon as the wrapper constructs the underlying LlamaModel.
      const model = new LlamaCpp({ modelPath: llamaPath });

      console.log(`You: ${question}`);
      const response = await model.invoke(question);
      console.log(`AI : ${response}`);
    } else {
      console.error(`Model not found at ${llamaPath}`);
    }

error: same TypeError and stack trace as above.
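
For context on the error: the stack trace shows @langchain/community's createLlamaModel calling new LlamaModel(...) directly, which matched the node-llama-cpp v2 API. In v3, that constructor expects an internal argument (carrying _llama) that only the runtime handle returned by getLlama() provides, so the direct call destructures undefined and throws. A minimal sketch of the v3-native loading path (reusing the model path from above):

    import { getLlama } from "node-llama-cpp";

    // v3: obtain a Llama runtime handle first...
    const llama = await getLlama();
    // ...then load the model through it instead of `new LlamaModel(...)`.
    const model = await llama.loadModel({
      modelPath: "../project/data/llm-models/Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf",
    });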

My Environment

command line: npx --yes node-llama-cpp inspect gpu

    'nlc' is not recognized as an internal or external command,
    operable program or batch file.

Using: Windows 10 (though I get this error in WSL as well)
Node: v22.9.0
"node-llama-cpp": "^3.1.1"

Additional Context

No response

Relevant Features Used

Are you willing to resolve this issue by submitting a Pull Request?

Yes, I have the time, and I know how to start.

PeterTucker commented 1 month ago

Answer from a LangChain dev: "use version 2, not 3". In package.json:

    "dependencies": {
        "node-llama-cpp": "^2"
    },
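
If you'd rather stay on node-llama-cpp 3.x, the alternative is to drop the @langchain/community wrapper and call the library directly. A sketch based on the v3 API (not from this thread), reusing the model path and question from the reproduction above:

    import { getLlama, LlamaChatSession } from "node-llama-cpp";

    const llama = await getLlama();
    const model = await llama.loadModel({
      modelPath: "../project/data/llm-models/Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf",
    });

    // In v3, chat goes through a context sequence rather than model.invoke().
    const context = await model.createContext();
    const session = new LlamaChatSession({ contextSequence: context.getSequence() });

    const response = await session.prompt("Where do Llamas come from?");
    console.log(`AI : ${response}`);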
