Atome-FE / llama-node

Believe in AI democratization. llama for Node.js, backed by llama-rs, llama.cpp, and rwkv.cpp; works locally on your laptop CPU. Supports llama/alpaca/gpt4all/vicuna/rwkv models.
https://llama-node.vercel.app/
Apache License 2.0

Can't run the example on macOS M1 Pro #92

Open greenido opened 1 year ago

greenido commented 1 year ago
  1. I tried to run https://github.com/Atome-FE/llama-node/blob/main/example/js/langchain/langchain.js with `ggml-vic7b-q4_0.bin`.
  2. At `await llama.load(config);` I get: `Illegal instruction: 4`.
  3. I installed the bindings with `npm install @llama-node/llama-cpp`; alpaca.cpp works in another example.

Any ideas?