antimatter15 / alpaca.cpp

Locally run an Instruction-Tuned Chat-Style LLM
MIT License

zsh: illegal hardware instruction ( ./chat ) #185

Open enokseth opened 1 year ago

enokseth commented 1 year ago

┌──(wakelock1㉿kali)-[~/Lamma]
└─$ sudo ./chat

main: seed = 1680361550
llama_model_load: loading model from 'ggml-alpaca-7b-q4.bin' - please wait ...
llama_model_load: ggml ctx size = 6065.34 MB
zsh: illegal hardware instruction  sudo ./chat

kaneg commented 1 year ago

I guess you're using an M1/M2 Mac. If so, you can compile it from source with the command below: arch -arm64 make. Then you will get a chat binary built for the ARM64 CPU.
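
A minimal sketch of that route, assuming a stock alpaca.cpp checkout on an Apple Silicon Mac and that the Makefile keeps llama.cpp's clean target:

# from the root of the alpaca.cpp checkout
make clean        # throw away objects that may have been built as x86_64 under Rosetta
arch -arm64 make  # build the chat binary natively for arm64
file ./chat       # should report a Mach-O arm64 executable
./chat            # expects ggml-alpaca-7b-q4.bin in the current directory, as in the log above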

0x0OZ commented 1 year ago

I guess you're using an M1/M2 Mac. If so, you can compile it from source with the command below: arch -arm64 make. Then you will get a chat binary built for the ARM64 CPU.

I am having the same issue running on a Dell; it's an old model, though.

❯ uname -srmvo
Linux 6.2.8-1-MANJARO #1 SMP PREEMPT_DYNAMIC Wed Mar 22 21:14:50 UTC 2023 x86_64 GNU/Linux
❯ ./chat
main: seed = 1681139515
llama_model_load: loading model from 'ggml-alpaca-7b-q4.bin' - please wait ...
llama_model_load: ggml ctx size = 6065.34 MB
[1]    1229464 illegal hardware instruction (core dumped)  ./chat
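
On an older x86_64 box, "illegal hardware instruction" usually means the binary was compiled with SIMD extensions the CPU does not support. A quick check, with a hedged hint at the fix (the -m flag names in the comment are an assumption about the alpaca.cpp Makefile of that era; verify against your copy):

# list which of the relevant SIMD extensions this CPU actually advertises
grep -o -w -E 'avx2|avx|fma|f16c' /proc/cpuinfo | sort -u
# if avx2/fma/f16c are not in the output, edit the Makefile to drop the matching
# -mavx2 -mfma -mf16c flags (assumed names) and rebuild with: make clean && make
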
enokseth commented 1 year ago

I guess you're using an M1/M2 Mac. If so, you can compile it from source with the command below: arch -arm64 make. Then you will get a chat binary built for the ARM64 CPU.

Hey, thanks for the response bro. This is my laptop; sorry, my keyboard is incomplete:

wakelock1@kali

OS: Kali GNU/Linux Rolling x
Host: MacBookAir4,2 1.0
Kernel: 6.1.0-kali5-amd64
Uptime: 25 mins
Packages: 3883 (dpkg)
Shell: zsh 5.9
Resolution: 1440x900
DE: GNOME 43.1
WM: Mutter
WM Theme: Kali-Dark
Theme: Kali-Dark [GTK2/3]
Icons: Flat-Remix-Blue-Dark
Terminal: gnome-terminal
CPU: Intel i5-2557M (4) @ 2.
GPU: Intel 2nd Generation Co
Memory: 1564MiB / 3831MiB

enokseth commented 1 year ago

I have just seen that this is probably because of the Python 3 version I have, 3.11. In fact, on old computers it is necessary to go back to 3.9, because it is lighter and compatible with the "Torch" module, which does not work with my Python 3 version. Downgrade Python and follow the tutorials for llama.cpp at https://github.com/ggerganov/llama.cpp; this is the way. But when I try to convert the models in the right directory with the Python script, my bash says:

┌──(wakelock1㉿kali)-[~/llama.cpp]
└─$ python3 convert-pth-to-ggml.py models/7B/ 1
Traceback (most recent call last):
  File "/home/wakelock1/llama.cpp/convert-pth-to-ggml.py", line 23, in <module>
    import torch
ModuleNotFoundError: No module named 'torch'
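
The traceback just means PyTorch is not installed for the interpreter running the script. A minimal sketch of one fix, assuming pip works for that interpreter; the venv path is made up, and numpy/sentencepiece are an assumption about what else the conversion script imports (the traceback only shows torch missing):

cd ~/llama.cpp
python3 -m venv ~/convert-venv               # hypothetical venv location
source ~/convert-venv/bin/activate
pip install torch numpy sentencepiece        # torch is what the traceback reports; the other two are assumed
python3 convert-pth-to-ggml.py models/7B/ 1  # re-run the conversion once the import succeeds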

BonaBobo commented 1 year ago

I have the same issue. I am using a MacBook Air 2017.

enokseth commented 1 year ago

I have the same issue. I am using a MacBook Air 2017.

Yes, it is just the Python 3 version. Python 3.9 is not compatible: NumPy and SciPy do not work with that version. Try to downgrade, or use a venv with another, downgraded version of Python 3.
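
A minimal sketch of the venv route, assuming an older CPython is already installed next to the system Python; the paths are made up, and 3.9 is used only because it was suggested earlier in this thread:

python3.9 -m venv ~/py39-venv          # build the venv from the older interpreter (hypothetical path)
source ~/py39-venv/bin/activate
python --version                       # confirm the venv really runs 3.9.x
pip install torch numpy sentencepiece  # reinstall the conversion dependencies inside the venv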