I'm trying to run the Wizard-Vicuna-13B-Uncensored model on a VM (16 GB RAM), but I'm getting the error below:
```
Error: Missing field nGpuLayers
    at LLamaCpp.<anonymous> (file:///usr/local/lib/node_modules/catai/node_modules/llama-node/dist/llm/llama-cpp.js:63:35)
    at Generator.next (<anonymous>)
    at file:///usr/local/lib/node_modules/catai/node_modules/llama-node/dist/llm/llama-cpp.js:33:61
    at new Promise (<anonymous>)
    at __async (file:///usr/local/lib/node_modules/catai/node_modules/llama-node/dist/llm/llama-cpp.js:17:10)
    at LLamaCpp.load (file:///usr/local/lib/node_modules/catai/node_modules/llama-node/dist/llm/llama-cpp.js:61:12)
    at LLM.load (/usr/local/lib/node_modules/catai/node_modules/llama-node/dist/index.cjs:52:21)
    at #addNew (file:///usr/local/lib/node_modules/catai/src/alpaca-client/node-llama/process-pull.js:88:21)
    at new NodeLlamaActivePull (file:///usr/local/lib/node_modules/catai/src/alpaca-client/node-llama/process-pull.js:19:38)
    at file:///usr/local/lib/node_modules/catai/src/alpaca-client/node-llama/node-llama.js:8:48 {
  code: 'InvalidArg'
}
```
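From the trace, the error seems to come from `LLamaCpp.load()` in llama-node: the config object that catai passes in is missing the `nGpuLayers` field, and the native binding rejects it with `InvalidArg`. For context, here is a minimal sketch of how I understand a direct `load()` call with that field set, based on llama-node's LLamaCpp adapter (the model filename is a placeholder, not my actual setup):

```js
import { LLM } from "llama-node";
import { LLamaCpp } from "llama-node/dist/llm/llama-cpp.js";
import path from "path";

// Placeholder path; my model is actually managed by catai.
const model = path.resolve(process.cwd(), "./Wizard-Vicuna-13B-Uncensored.bin");
const llama = new LLM(LLamaCpp);

const config = {
    modelPath: model,
    enableLogging: true,
    nCtx: 1024,
    seed: 0,
    f16Kv: false,
    logitsAll: false,
    vocabOnly: false,
    useMlock: false,
    embedding: false,
    useMmap: true,
    nGpuLayers: 0, // the field the error complains about; 0 = CPU-only, which fits my VM
};

await llama.load(config);
```

Since catai builds this config internally rather than exposing it to me, I suspect a version mismatch: the installed catai may predate the `nGpuLayers` field that its bundled llama-node version now requires. Is there a way to fix this without patching catai's sources?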