Closed — loretoparisi closed this issue 1 year ago
@loretoparisi What is the error that you get? Please also provide the Node.js version, OS type and OS version that you have, and the version of node-llama-cpp that you're using.

Also, your use of node-llama-cpp seems incorrect: you shouldn't clone this repo to use it. Instead, install it as a package from npm, as detailed in the README.md file.
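For reference, the intended flow from the README is roughly the following (the `download` command is the one shown later in this thread; treat the exact steps as a sketch, since they may differ between versions):

```shell
# Install node-llama-cpp as a dependency instead of cloning the repo
npm install node-llama-cpp

# Download and build a llama.cpp release for the bindings
npx node-llama-cpp download
```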
First off, thank you. Excellent work.

I'm also getting this error when trying to use CUDA.
```
npx node-llama-cpp download --cuda
Debugger attached.
Debugger attached.
Repo: ggerganov/llama.cpp
Release: b1154
CUDA: enabled
✔ Fetched llama.cpp info
✔ Removed existing llama.cpp directory
Cloning llama.cpp
Clone ggerganov/llama.cpp 100% ████████████████████████████████████████ 0s
✔ Generated required files
Compiling llama.cpp
Debugger attached.
Debugger attached.
Waiting for the debugger to disconnect...
Waiting for the debugger to disconnect...

cli.js download

Download a release of llama.cpp and compile it

Options:
  -h, --help             Show help  [boolean]
      --repo             The GitHub repository to download a release of
                         llama.cpp from. Can also be set via the
                         NODE_LLAMA_CPP_REPO environment variable
                         [string] [default: "ggerganov/llama.cpp"]
      --release          The tag of the llama.cpp release to download. Set to
                         "latest" to download the latest release. Can also be
                         set via the NODE_LLAMA_CPP_REPO_RELEASE environment
                         variable  [string] [default: "b1154"]
  -a, --arch             The architecture to compile llama.cpp for  [string]
  -t, --nodeTarget       The Node.js version to compile llama.cpp for.
                         Example: v18.0.0  [string]
      --cuda             Compile llama.cpp with CUDA support. Can also be set
                         via the NODE_LLAMA_CPP_CUDA environment variable
                         [boolean] [default: false]
      --skipBuild, --sb  Skip building llama.cpp after downloading it
                         [boolean] [default: false]
  -v, --version          Show version number  [boolean]

Error: Command npm run -s node-gyp-llama -- configure --arch=x64 --target=v18.17.1 exited with code 1
    at ChildProcess.
```
OS: Windows 11
Node version: 18.17.1
CUDA version: V11.3.58
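As a side note, the help text in the output above says the `--cuda` flag can also be set through an environment variable, so these two invocations should be equivalent ways to request a CUDA build:

```shell
# Enable CUDA via the CLI flag
npx node-llama-cpp download --cuda

# Or equivalently, via the environment variable named in the help text
NODE_LLAMA_CPP_CUDA=true npx node-llama-cpp download
```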
I have released a new version of node-llama-cpp that uses cmake instead of node-gyp. Try upgrading to it and building again, as I think it may solve your issue.
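Upgrading and rebuilding would look something like this (a sketch; `@latest` simply pulls the newest published version from npm):

```shell
# Upgrade to the latest published version, which builds with cmake
npm install node-llama-cpp@latest

# Re-run the download/build step
npx node-llama-cpp download --cuda
```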
Closed due to inactivity, as I assume this issue was fixed as part of #37
Sorry. Yes, my issue is resolved.
I'm running the example script provided in the README.md using a local/offline copy of this library (which should work fine). I get this error when calling the script for the first time. I'm not using any specific env; llama.cpp has been downloaded into the bindings folder under llama/llama.cpp. The example script was