Atome-FE / llama-node

Believe in AI democratization. llama for Node.js, backed by llama-rs, llama.cpp and rwkv.cpp; works locally on your laptop CPU. Supports llama/alpaca/gpt4all/vicuna/rwkv models.
https://llama-node.vercel.app/
Apache License 2.0

zsh: illegal hardware instruction #18

Closed · loretoparisi closed this issue 1 year ago

loretoparisi commented 1 year ago

I get zsh: illegal hardware instruction when running the following inference script with Node.js v14.17.3:

import { LLama } from "llama-node";
import { LLamaCpp } from "llama-node/dist/llm/llama-cpp.js";
import path from "path";

// Path to the converted model file loaded by llama.cpp
const model = path.resolve(process.cwd(), "../gpt4all/gpt4all-converted.bin");

const llama = new LLama(LLamaCpp);

// Options forwarded to llama.cpp when loading the model
const config = {
    path: model,
    enableLogging: true,
    nCtx: 1024,
    nParts: -1,
    seed: 0,
    f16Kv: false,
    logitsAll: false,
    vocabOnly: false,
    useMlock: false,
    embedding: false,
};

// Load the model with the options above
llama.load(config);

const template = `How are you`;

const prompt = `### Human:

${template}

### Assistant:`;

// Run inference; the callback receives each generated token as it streams in
llama.createCompletion(
    {
        nThreads: 4,
        nTokPredict: 2048,
        topK: 40,
        topP: 0.1,
        temp: 0.2,
        repeatPenalty: 1,
        stopSequence: "### Human",
        prompt,
    },
    (response) => {
        process.stdout.write(response.token);
    }
);

I'm using

% tsc --version                        
Version 5.0.4
% npm --version
9.6.4
% node --version
v14.17.3
hlhr202 commented 1 year ago

Are you using Alpine Linux? We currently compile for GNU/Linux; Alpine uses a different libc (musl).

loretoparisi commented 1 year ago

> Are you using Alpine Linux? We currently compile for GNU/Linux; Alpine uses a different libc (musl).

No, I'm on macOS, Apple Silicon.
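For anyone hitting this on Apple Silicon, a quick sanity check (a sketch, not part of the original report) is to print what the running Node binary reports, since an x64 Node under Rosetta will resolve x64 prebuilt native addons and is one common way to end up loading a binary built for the wrong target:

// check.mjs: prints the platform/arch of the running Node binary.
// "darwin arm64" means a native Apple Silicon build of Node;
// "darwin x64" means Node is running under Rosetta translation.
console.log(process.platform, process.arch);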

hlhr202 commented 1 year ago

@loretoparisi This is a bug in the CI cross-compiling process and I'm investigating it. As a temporary workaround you need to build it manually: prepare Rust and clone this project, locate packages/llama-cpp and run "npm run build", then collect the binary from the @llama-node folder (under packages/llama-cpp).
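For reference, a rough sketch of that manual build, assuming the repository lives at github.com/Atome-FE/llama-node and that Rust/cargo are already installed (exact folder names may differ from the description above):

% git clone https://github.com/Atome-FE/llama-node.git
% cd llama-node/packages/llama-cpp
% npm install
% npm run build
% ls @llama-node    # the built native binary should be collected from here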

hlhr202 commented 1 year ago

@loretoparisi I have published 0.0.26 to fix this temporarily, but you still need to upgrade to Node.js 16 or a higher version. I will leave this issue open until I fix the CI pipeline for cross compiling.
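In other words, the upgrade path would look something like the following (a sketch, assuming nvm is used to manage Node versions):

% nvm install 16
% nvm use 16
% npm install llama-node@0.0.26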

loretoparisi commented 1 year ago

> @loretoparisi I have published 0.0.26 to fix this temporarily, but you still need to upgrade to Node.js 16 or a higher version. I will leave this issue open until I fix the CI pipeline for cross compiling.

Okay, thank you! I think Node 16 is OK. I have problems with WASM modules on Node >= 17, so just for testing it should be fine. Right now I'm still stuck on Node 14.17.3 for that reason!

hlhr202 commented 1 year ago

@loretoparisi This issue has been fully resolved since v0.0.27.