nomic-ai / pygpt4all

Official supported Python bindings for llama.cpp + gpt4all
https://nomic-ai.github.io/pygpt4all/
MIT License
1.02k stars 162 forks

pyllamacpp does not support M1-chip MacBooks #57

Open laihenyi opened 1 year ago

laihenyi commented 1 year ago

```
Traceback (most recent call last):
  File "/Users/laihenyi/Documents/GitHub/gpt4all-ui/app.py", line 29, in <module>
    from pyllamacpp.model import Model
  File "/Users/laihenyi/Documents/GitHub/gpt4all-ui/env/lib/python3.11/site-packages/pyllamacpp/model.py", line 21, in <module>
    import _pyllamacpp as pp
ImportError: dlopen(/Users/laihenyi/Documents/GitHub/gpt4all-ui/env/lib/python3.11/site-packages/_pyllamacpp.cpython-311-darwin.so, 0x0002): tried:
  '/Users/laihenyi/Documents/GitHub/gpt4all-ui/env/lib/python3.11/site-packages/_pyllamacpp.cpython-311-darwin.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64')),
  '/System/Volumes/Preboot/Cryptexes/OS/Users/laihenyi/Documents/GitHub/gpt4all-ui/env/lib/python3.11/site-packages/_pyllamacpp.cpython-311-darwin.so' (no such file),
  '/Users/laihenyi/Documents/GitHub/gpt4all-ui/env/lib/python3.11/site-packages/_pyllamacpp.cpython-311-darwin.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64'))
```
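The key line is `incompatible architecture (have 'x86_64', need 'arm64')`: the compiled extension was built for Intel, but the running interpreter is arm64. A quick diagnostic sketch (generic stdlib checks, not part of pyllamacpp) to confirm the mismatch:

```python
import importlib.util
import platform
import sysconfig

# Architecture the running interpreter was built for
print("interpreter arch:", platform.machine())  # e.g. 'arm64' or 'x86_64'

# Filename suffix compiled extensions must carry for this interpreter
print("extension suffix:", sysconfig.get_config_var("EXT_SUFFIX"))

# Locate the offending extension module, if it is installed at all
spec = importlib.util.find_spec("_pyllamacpp")
if spec is not None:
    # On macOS, running `file` on this path shows whether the binary
    # itself is x86_64 or arm64.
    print("module path:", spec.origin)
else:
    print("_pyllamacpp is not installed in this environment")
```

If the interpreter reports `arm64` but `file` on the module path reports `x86_64`, the extension has to be rebuilt or reinstalled for arm64.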

covalspace commented 1 year ago

I'm having a similar issue on an M1 chip: a missing import, even though `pip install` reports the requirements are already satisfied.

shivam-singhal commented 1 year ago

Similar issue, except I just get `[1] 79802 illegal hardware instruction python` upon running `from pyllamacpp.model import Model`.

NickAnastasoff commented 1 year ago

> I'm having a similar issue on an M1 chip: a missing import, even though `pip install` reports the requirements are already satisfied.

I was having the same issue because I had multiple versions of Python installed. It might be worth trying a different Python version.
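With several Pythons installed, the interpreter you run is often not the one pip installed into. A quick sanity check (a generic diagnostic, not specific to pyllamacpp):

```python
import platform
import sys

# The interpreter actually executing this script; compare it against
# the one your shell's `pip` installs into.
print("executable:", sys.executable)
print("version   :", sys.version.split()[0])
print("machine   :", platform.machine())  # 'arm64' if running natively on Apple silicon
```

Running pip as `python -m pip install ...` with the same interpreter guarantees the package lands in the environment you are actually using.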

laihenyi commented 1 year ago

Any luck? Days have passed, and it seems the developers have not fixed this issue yet.

abdeladim-s commented 1 year ago

Sorry @laihenyi, I don't have a Mac, so I can't debug the issue. Could you please try to build from source in that case? It is a straightforward process!
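One way to force a local build without cloning anything is pip's `--no-binary` flag, which skips prebuilt wheels and compiles the sdist for the running interpreter. A sketch, assuming a working compiler toolchain is present:

```shell
# First confirm the interpreter is native arm64 (should print: arm64)
python3 -c 'import platform; print(platform.machine())'

# Ignore prebuilt wheels and compile pyllamacpp from the source
# distribution, so the extension matches this interpreter's architecture.
python3 -m pip install --no-cache-dir --no-binary :all: pyllamacpp
```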

shivam-singhal commented 1 year ago

I encountered two problems:

  1. My conda install was for the x86_64 platform; I should have installed the arm64 binary instead.
  2. Installing from the wheel (PyPI, I believe) pulled the x86_64 version of pyllamacpp, not the arm64 version.

This ultimately prevented the binary from linking against BLAS, which macOS provides via the Accelerate framework (specifically, it could not find `_cblas_sgemm`, the single-precision matrix-multiplication routine). In simple terms, a dependency issue caused by incompatible platforms.

I found that I had to fix both of the above issues; fixing just one did not work. Now, with a separate conda for arm64 and pyllamacpp installed from source, I am able to run the sample code.
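The first fix usually means an arm64-native conda (e.g. Miniforge). A sketch of that step; the env name and Python version here are illustrative assumptions:

```shell
# CONDA_SUBDIR forces arm64 packages even if the base conda install
# is ambiguous about its platform.
CONDA_SUBDIR=osx-arm64 conda create -n llama-arm64 python=3.11
conda activate llama-arm64

# Verify the new interpreter is native arm64 before reinstalling
# pyllamacpp from source (expect: arm64).
python -c 'import platform; print(platform.machine())'
```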

abdeladim-s commented 1 year ago

Thanks so much @shivam-singhal for the solution. I really appreciate it.

laihenyi commented 1 year ago

> I encountered two problems:
>
>   1. My conda install was for the x86_64 platform; I should have installed the arm64 binary instead.
>   2. Installing from the wheel (PyPI, I believe) pulled the x86_64 version of pyllamacpp, not the arm64 version.
>
> This ultimately prevented the binary from linking against BLAS, which macOS provides via the Accelerate framework (specifically, it could not find `_cblas_sgemm`, the single-precision matrix-multiplication routine). In simple terms, a dependency issue caused by incompatible platforms.
>
> I found that I had to fix both of the above issues; fixing just one did not work. Now, with a separate conda for arm64 and pyllamacpp installed from source, I am able to run the sample code.

Sorry, I am not a developer, but I am an M1 MacBook user. What can I do to help you fix this bug? Remote-connect to my laptop so you can compile an M1 binary?

hsgarcia22 commented 1 year ago

> I encountered two problems:
>
>   1. My conda install was for the x86_64 platform; I should have installed the arm64 binary instead.
>   2. Installing from the wheel (PyPI, I believe) pulled the x86_64 version of pyllamacpp, not the arm64 version.
>
> This ultimately prevented the binary from linking against BLAS, which macOS provides via the Accelerate framework (specifically, it could not find `_cblas_sgemm`, the single-precision matrix-multiplication routine). In simple terms, a dependency issue caused by incompatible platforms.
>
> I found that I had to fix both of the above issues; fixing just one did not work. Now, with a separate conda for arm64 and pyllamacpp installed from source, I am able to run the sample code.

This doesn't make sense; I'm not running this in conda, it's native python3. What did you modify to correct the original issue? And why is everyone linking this to the `pygpt4all` `import GPT4All` problem when it seems to be a separate issue?