nomic-ai / gpt4all

GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.
https://nomic.ai/gpt4all
MIT License

[macOS] typescript bindings: no matching implementation found #1631

Closed: FlyingSheep-Cody closed this 10 months ago

FlyingSheep-Cody commented 11 months ago

System Info

```
gpt-js@ /Users/yadu/python-workspace/ai-learning/gpt-js
└── gpt4all@3.0.0
```

MacOS 13.5.2 (22G91)

Node.js v20.9.0

Reproduction

Follow the documentation at https://docs.gpt4all.io/gpt4all_typescript.html, install with `npm install gpt4all@latest`, and then run the code below in a JS file:

```js
import { createCompletion, loadModel } from '../src/gpt4all.js';

const model = await loadModel('gpt4all-falcon-q4_0', { verbose: true });

const response = await createCompletion(model, [
    { role: 'system', content: 'You are meant to be annoying and unhelpful.' },
    { role: 'user', content: 'What is 1 + 1?' },
]);
```

It returns this error:

```
Found gpt4all-falcon-q4_0 at /Users/yadu/.cache/gpt4all/gpt4all-falcon-q4_0.gguf
Creating LLModel with options: {
  model_name: 'gpt4all-falcon-q4_0.gguf',
  model_path: '/Users/yadu/.cache/gpt4all',
  library_path: '/Users/yadu/python-workspace/ai-learning',
  device: 'cpu'
}
/Users/yadu/python-workspace/ai-learning/gpt-js/node_modules/gpt4all/src/gpt4all.js:71
    const llmodel = new LLModel(llmOptions);
                    ^

Error: Model format not supported (no matching implementation found)
    at loadModel (/Users/yadu/python-workspace/ai-learning/gpt-js/node_modules/gpt4all/src/gpt4all.js:71:21)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async file:///Users/yadu/python-workspace/ai-learning/gpt-js/test%20copy.js:4:15

Node.js v20.9.0
```


I also tried gpt4all 2.2.0 with the ggml-vicuna-7b-1.1-q4_2 .bin model; same issue.

Can anybody help? Thanks!

Expected behavior

The code should run and return the model's answer.

cebtenzzre commented 11 months ago

What CPU do you have? I believe there is a known issue with the typescript bindings on M1/M2 Macs right now.
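A quick way to check this is to print what Node itself reports; on Apple Silicon, an `x64` value here means Node is running under Rosetta rather than natively, which changes which prebuilt binding would be needed. A minimal sketch, with illustrative values in the comments:

```js
import os from 'node:os';

// The architecture the Node binary itself was built for:
// "arm64" means native Apple Silicon, "x64" means Rosetta.
console.log(process.platform, process.arch); // e.g. "darwin arm64"

// The CPU model string as the OS reports it to Node.
console.log(os.cpus()[0].model); // e.g. "Apple M1 Pro"
```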

FlyingSheep-Cody commented 11 months ago

@cebtenzzre Apple M1 Pro, 32 GB.

However, it also fails on my desktop, which runs Windows.

chervox commented 11 months ago

+1

DePasqualeOrg commented 11 months ago

I'm getting this error on an M3 MacBook Pro with 16 GB of RAM.

scarabaeus commented 11 months ago

+1 on any model that I try to run. For example:

```
Creating LLModel with options: {
  model_name: 'ggml-vicuna-7b-1.1-q4_2.bin',
  model_path: '/Users/<user_name>/.cache/gpt4all',
  library_path: '/Users/<user_name>/dev/gpt4all-node'
}
Error: Model format not supported (no matching implementation found)
    at loadModel (/Users/<user_name>/dev/gpt4all-node/node_modules/gpt4all/src/gpt4all.js:69:21)
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
    at /Users/<user_name>/dev/gpt4all-node/src/main.ts:7:17
[ERROR] 11:12:58 Error: Model format not supported (no matching implementation found)
```

Running:

```
Chip: Apple M1 Pro
Memory: 32 GB
macOS: 14.0
node --version: v18.6.0
```

scarabaeus commented 11 months ago

Update:

This particular error was, in my case, an RTFM user error. I was able to get past it by actually following the instructions and copying the .dylib files generated after building the backend into my Node project's root folder:

> This will build platform-dependent dynamic libraries, and will be located in runtimes/(platform)/native. The only current way to use them is to put them in the current working directory of your application. That is, WHEREVER YOU RUN YOUR NODE APPLICATION.
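For illustration, a minimal sketch of that copy step in Node. The `runtimes/(platform)/native` layout comes from the quote above; the exact source path, including the `osx-arm64` platform folder name, is an assumption and will vary by machine and build:

```js
import fs from 'node:fs';
import path from 'node:path';

// Assumed backend build output location; adjust for your checkout/platform.
const src = 'gpt4all-backend/runtimes/osx-arm64/native';
const dest = process.cwd(); // wherever the Node app is run from

// Copy each built dynamic library next to the running application.
for (const file of fs.readdirSync(src)) {
  if (file.endsWith('.dylib')) {
    fs.copyFileSync(path.join(src, file), path.join(dest, file));
    console.log(`copied ${file} -> ${dest}`);
  }
}
```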

I am now, however, stuck at the error:

```
magic_match: gguf_init_from_file failed
/Users/<user_name>/dev/gpt4all-node/node_modules/gpt4all/src/gpt4all.js:70
    const llmodel = new LLModel(llmOptions);
                    ^
Error: No such file or directory
    at loadModel (/Users/<user_name>/dev/gpt4all-node/node_modules/gpt4all/src/gpt4all.js:70:21)
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async /Users/<user_name>/dev/gpt4all-node/src/main.ts:7:17
```

I'll report back if I've missed more instructions and/or have resolved my issue.
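The trace above suggests one of two things: the model file is missing, or it is not recognized as GGUF (GGUF files begin with the four ASCII bytes "GGUF", so an old GGML-era .bin model will fail the backend's magic check). A minimal sanity check along those lines, with the path assembled from the trace's placeholders:

```js
import fs from 'node:fs';

// Illustrative path, following the placeholder convention used above.
const modelPath = '/Users/<user_name>/.cache/gpt4all/ggml-vicuna-7b-1.1-q4_2.bin';

if (!fs.existsSync(modelPath)) {
  console.error('model file not found:', modelPath);
} else {
  // Read the first four bytes and compare against the GGUF magic.
  const fd = fs.openSync(modelPath, 'r');
  const magic = Buffer.alloc(4);
  fs.readSync(fd, magic, 0, 4, 0);
  fs.closeSync(fd);
  console.log(magic.toString('ascii') === 'GGUF' ? 'looks like GGUF' : 'not a GGUF file');
}
```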

jacoobes commented 11 months ago

Hi, due to real-life circumstances I've been unable to actively maintain this package. I try to keep an eye on it, but it's not high priority. I think I know the fix: I asked @cebtenzzre about the CircleCI script, and apparently the bindings are being built for macOS x64, but not for ARM/M1 chips. This means that when prebuildify tries to locate a prebuilt binary, it cannot find the shared library modules that the gpt4all backend produces. I think this can be solved by building from source; building from source entails these instructions. Let me know if this solves it. In the meantime, that is probably the workaround. I've been experimenting with some cross-compilation via zig build but stopped due to ABI incompatibilities (zig cc cannot build MSVC, only MinGW); maybe I'll experiment with xmake.
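For context on the prebuildify part: packages built with prebuildify are typically loaded at runtime with node-gyp-build, which scans the package's `prebuilds/<platform>-<arch>/` folder for a binary matching the current machine, so shipping only a darwin-x64 prebuild leaves darwin-arm64 users with nothing to load. A sketch of that resolution, assuming the usual prebuildify/node-gyp-build pairing (the package path is illustrative):

```js
import { createRequire } from 'node:module';

const require = createRequire(import.meta.url);
const load = require('node-gyp-build'); // loader usually paired with prebuildify

try {
  // Looks for a local build output or a prebuilt .node binary matching
  // process.platform and process.arch under the given package root.
  const binding = load('./node_modules/gpt4all'); // illustrative path
  console.log('native binding loaded:', Object.keys(binding));
} catch (err) {
  // With only darwin-x64 prebuilds shipped, darwin-arm64 ends up here.
  console.error(err.message);
}
```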

jacoobes commented 10 months ago

This should be fixed for Mac M1 users now.