Hi, I'm trying to get CUDA support running. My system:

- Windows 11
- Ryzen 7 5800
- RTX 3090
- CUDA 12.4 (V12.4.131)
When I tried to run the command from the documentation, `npx --no node-llama-cpp download --cuda`, I get this:
```
Repo: ggerganov/llama.cpp
Release: b2487
CUDA: enabled
✔ Removed existing llama.cpp directory
Cloning llama.cpp
Clone ggerganov/llama.cpp (local bundle) 0% ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 0s
Failed to clone git bundle, cloning from GitHub instead GitError: Error: spawn git ENOENT
at ChildProcess._handle.onexit (node:internal/child_process:285:19)
at onErrorNT (node:internal/child_process:483:16)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21)
at Object.action (file:///E:/Github/AIAssistant/AIAssistant/backend/node_modules/simple-git/dist/esm/index.js:4412:25)
at PluginStore.exec (file:///E:/Github/AIAssistant/AIAssistant/backend/node_modules/simple-git/dist/esm/index.js:4451:25)
at file:///E:/Github/AIAssistant/AIAssistant/backend/node_modules/simple-git/dist/esm/index.js:1363:43
at new Promise (<anonymous>)
at GitExecutorChain.handleTaskData (file:///E:/Github/AIAssistant/AIAssistant/backend/node_modules/simple-git/dist/esm/index.js:1361:16)
at GitExecutorChain.<anonymous> (file:///E:/Github/AIAssistant/AIAssistant/backend/node_modules/simple-git/dist/esm/index.js:1345:44)
at Generator.next (<anonymous>)
at fulfilled (file:///E:/Github/AIAssistant/AIAssistant/backend/node_modules/simple-git/dist/esm/index.js:45:24) {
task: {
commands: [
'clone',
'--quiet',
'E:\\Github\\AIAssistant\\AIAssistant\\backend\\node_modules\\node-llama-cpp\\llama\\gitRelease.bundle',
'E:\\Github\\AIAssistant\\AIAssistant\\backend\\node_modules\\node-llama-cpp\\llama\\llama.cpp'
],
format: 'utf-8',
parser: [Function: parser]
}
}
Clone ggerganov/llama.cpp (GitHub) 0% ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 0s
node-llama-cpp download

Download a release of llama.cpp and compile it

Options:
  -h, --help          Show help                                       [boolean]
      --repo          The GitHub repository to download a release of llama.cpp
                      from. Can also be set via the NODE_LLAMA_CPP_REPO
                      environment variable
                                     [string] [default: "ggerganov/llama.cpp"]
      --release       The tag of the llama.cpp release to download. Set to
                      "latest" to download the latest release. Can also be set
                      via the NODE_LLAMA_CPP_REPO_RELEASE environment variable
                                                   [string] [default: "b2487"]
  -a, --arch          The architecture to compile llama.cpp for        [string]
  -t, --nodeTarget    The Node.js version to compile llama.cpp for.
                      Example: v18.0.0                                  [string]
      --cuda          Compile llama.cpp with CUDA support. Can also be set via
                      the NODE_LLAMA_CPP_CUDA environment variable
                                                    [boolean] [default: false]
      --skipBuild, --sb  Skip building llama.cpp after downloading it
                                                    [boolean] [default: false]
      --noBundle, --nb   Download a llama.cpp release only from GitHub, even
                      if a local git bundle exists for the release
                                                    [boolean] [default: false]
  -v, --version       Show version number                              [boolean]
GitError: Error: spawn git ENOENT
at ChildProcess._handle.onexit (node:internal/child_process:285:19)
at onErrorNT (node:internal/child_process:483:16)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21)
at Object.action (file:///E:/Github/AIAssistant/AIAssistant/backend/node_modules/simple-git/dist/esm/index.js:4412:25)
at PluginStore.exec (file:///E:/Github/AIAssistant/AIAssistant/backend/node_modules/simple-git/dist/esm/index.js:4451:25)
at file:///E:/Github/AIAssistant/AIAssistant/backend/node_modules/simple-git/dist/esm/index.js:1363:43
at new Promise (<anonymous>)
at GitExecutorChain.handleTaskData (file:///E:/Github/AIAssistant/AIAssistant/backend/node_modules/simple-git/dist/esm/index.js:1361:16)
at GitExecutorChain.<anonymous> (file:///E:/Github/AIAssistant/AIAssistant/backend/node_modules/simple-git/dist/esm/index.js:1345:44)
at Generator.next (<anonymous>)
at fulfilled (file:///E:/Github/AIAssistant/AIAssistant/backend/node_modules/simple-git/dist/esm/index.js:45:24) {
task: {
commands: [
'clone',
'--depth=1',
'--branch=b2487',
'--quiet',
'https://github.com/ggerganov/llama.cpp.git',
'E:\\Github\\AIAssistant\\AIAssistant\\backend\\node_modules\\node-llama-cpp\\llama\\llama.cpp'
],
format: 'utf-8',
parser: [Function: parser]
}
}
```
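From what I can tell, the key part is `GitError: Error: spawn git ENOENT`, which I believe means Node can't find a `git` executable on PATH (simple-git spawns `git` as a child process). As a sanity check I put together this minimal sketch; `check-git.js` is just a name I made up, and it only assumes the ENOENT comes from that kind of lookup failing:

```js
// check-git.js — can Node spawn `git` at all?
// Assumption: the ENOENT in the trace above comes from this exact lookup failing.
const { spawnSync } = require("node:child_process");

const result = spawnSync("git", ["--version"], { encoding: "utf-8" });
if (result.error) {
    // error.code === "ENOENT" means no `git` executable was found on PATH
    console.error("git not found:", result.error.code);
} else {
    console.log("git is reachable:", result.stdout.trim());
}
```

If that also fails with ENOENT, I'd guess the problem is Git being missing or not on PATH rather than anything CUDA-related, but I'm not sure.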
Any thoughts on what I'm doing wrong? Thanks in advance.