withcatai / node-llama-cpp

Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level
https://node-llama-cpp.withcat.ai
MIT License

Error: Command npm run -s node-gyp-llama -- configure --arch=arm64 --target=v18.0.0 exited with code 1 #26

Closed loretoparisi closed 1 year ago

loretoparisi commented 1 year ago

I'm running the example script from the README.md using a local/offline copy of this library (which should work fine). I get this error the first time I call the script. I'm not using any specific environment; llama.cpp has been downloaded into the bindings folder under llama/llama.cpp:

.
├── example.js
├── lib
│   ├── AbortError.d.ts
│   ├── AbortError.js
│   ├── AbortError.js.map
│   ├── ChatPromptWrapper.d.ts
│   ├── ChatPromptWrapper.js
│   ├── ChatPromptWrapper.js.map
│   ├── chatWrappers
│   ├── cli
│   ├── commands.d.ts
│   ├── commands.js
│   ├── commands.js.map
│   ├── config.d.ts
│   ├── config.js
│   ├── config.js.map
│   ├── index.d.ts
│   ├── index.js
│   ├── index.js.map
│   ├── llamaEvaluator
│   ├── package.json
│   ├── types.d.ts
│   ├── types.js
│   ├── types.js.map
│   └── utils
└── llama
    ├── addon.cpp
    ├── binariesGithubRelease.json
    ├── binding.gyp
    ├── llama.cpp
    └── usedBin.json

The example script was:

import {fileURLToPath} from "url";
import path from "path";
import {LlamaModel, LlamaContext, LlamaChatSession} from "./lib/index.js";

const __dirname = path.dirname(fileURLToPath(import.meta.url));

const model = new LlamaModel({
    modelPath: path.join(__dirname, "models", "LaMA-2-7B-32K_GGUF", "LLaMA-2-7B-32K-Q3_K_L.gguf")
});
const context = new LlamaContext({model});
const session = new LlamaChatSession({context});

const q1 = "Hi there, how are you?";
console.log("User: " + q1);
giladgd commented 1 year ago

@loretoparisi What is the error that you get? Please also provide your Node.js version, OS type and version, and the version of node-llama-cpp you're using.

Also, your use of node-llama-cpp seems incorrect: you shouldn't clone this repo to use it; instead, install it as a package from npm as detailed in the README.md file.
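For reference, a minimal sketch of that setup, assuming the npm package exposes the same LlamaModel / LlamaContext / LlamaChatSession API the script above imports (the model filename below is just a placeholder):

// Installed as a dependency rather than cloned:
//   npm install node-llama-cpp
import {fileURLToPath} from "url";
import path from "path";
import {LlamaModel, LlamaContext, LlamaChatSession} from "node-llama-cpp";

const __dirname = path.dirname(fileURLToPath(import.meta.url));

// Load a local GGUF model (placeholder filename) and start a chat session.
const model = new LlamaModel({
    modelPath: path.join(__dirname, "models", "my-model.Q4_K_M.gguf")
});
const context = new LlamaContext({model});
const session = new LlamaChatSession({context});

const q1 = "Hi there, how are you?";
console.log("User: " + q1);

const a1 = await session.prompt(q1);
console.log("AI: " + a1);

The only structural difference from the script above is importing from the "node-llama-cpp" package instead of a local ./lib/index.js, so the compiled binaries are resolved from node_modules.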

destinatus commented 1 year ago

First off, thank you. Excellent work.

I'm also getting this error when trying to use CUDA.

npx node-llama-cpp download --cuda

Debugger attached.
Debugger attached.
Repo: ggerganov/llama.cpp
Release: b1154
CUDA: enabled

✔ Fetched llama.cpp info
✔ Removed existing llama.cpp directory
Cloning llama.cpp
Clone ggerganov/llama.cpp 100% ████████████████████████████████████████ 0s
✔ Generated required files
Compiling llama.cpp
Debugger attached.
Debugger attached.
Waiting for the debugger to disconnect...
Waiting for the debugger to disconnect...

cli.js download

Download a release of llama.cpp and compile it

Options:
  -h, --help             Show help  [boolean]
      --repo             The GitHub repository to download a release of llama.cpp from. Can also be set via the NODE_LLAMA_CPP_REPO environment variable  [string] [default: "ggerganov/llama.cpp"]
      --release          The tag of the llama.cpp release to download. Set to "latest" to download the latest release. Can also be set via the NODE_LLAMA_CPP_REPO_RELEASE environment variable  [string] [default: "b1154"]
  -a, --arch             The architecture to compile llama.cpp for  [string]
  -t, --nodeTarget       The Node.js version to compile llama.cpp for. Example: v18.0.0  [string]
      --cuda             Compile llama.cpp with CUDA support. Can also be set via the NODE_LLAMA_CPP_CUDA environment variable  [boolean] [default: false]
      --skipBuild, --sb  Skip building llama.cpp after downloading it  [boolean] [default: false]
  -v, --version          Show version number  [boolean]

Error: Command npm run -s node-gyp-llama -- configure --arch=x64 --target=v18.17.1 exited with code 1
    at ChildProcess.<anonymous> (file:///.../node_modules/node-llama-cpp/dist/utils/spawnCommand.js:27:24)
    at ChildProcess.emit (node:events:514:28)
    at cp.emit (...\node_modules\cross-spawn\lib\enoent.js:34:29)
    at ChildProcess._handle.onexit (node:internal/child_process:291:12)
    at Process.callbackTrampoline (node:internal/async_hooks:130:17)
Waiting for the debugger to disconnect...
Waiting for the debugger to disconnect...

OS: Windows 11
Node version: 18.17.1
CUDA version: V11.3.58
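As an aside, the help output above notes that CUDA can also be enabled via the NODE_LLAMA_CPP_CUDA environment variable rather than the --cuda flag. On Windows Command Prompt that would look roughly like this (assuming the variable accepts a literal "true"):

set NODE_LLAMA_CPP_CUDA=true
npx node-llama-cpp download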

giladgd commented 1 year ago

I have released a new version of node-llama-cpp that uses CMake instead of node-gyp. Try upgrading to it and building again; I think it may solve your issue.
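For example, assuming the package was installed from npm and the CMake-based build is simply the latest published release, upgrading and rebuilding with CUDA would look something like:

npm install node-llama-cpp@latest
npx node-llama-cpp download --cuda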

giladgd commented 1 year ago

Closed due to inactivity; I assume this issue was fixed as part of #37.

destinatus commented 1 year ago

Sorry. Yes, my issue is resolved.