Open zevlg opened 1 year ago
Same....
Seems like vzip1_s8 and vzip2_s8 are AArch64-only intrinsics, which is why lines 1778 and 1779 of ggml.c fail to build on armv7l. They should be changed from
const int8x8_t vxlt = vzip1_s8(vxls, vxhs);
const int8x8_t vxht = vzip2_s8(vxls, vxhs);
to the ARMv7-compatible equivalent
const int8x8x2_t vxt = vzip_s8(vxls, vxhs);
const int8x8_t vxlt = vxt.val[0];
const int8x8_t vxht = vxt.val[1];
(Note that vget_low_s8(vcombine_s8(vxls, vxhs)) would just return vxls unchanged, without the interleave that vzip1_s8 performs, so it is not a correct substitute.)
@davidrimshnick I followed your steps to build the chat binary from source on a Raspberry Pi 4, and it compiles successfully. But it turns out the model can't be loaded on the Raspberry Pi. I wonder if you ran into this problem before?
pi@raspberrypi:~/Desktop/alpaca.cpp $ make chat
I llama.cpp build info:
I UNAME_S: Linux
I UNAME_P: unknown
I UNAME_M: armv7l
I CFLAGS: -I. -O3 -DNDEBUG -std=c11 -fPIC -pthread -mfpu=neon-fp-armv8 -mfp16-format=ieee -mno-unaligned-access -funsafe-math-optimizations
I CXXFLAGS: -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -pthread
I LDFLAGS:
I CC: cc (Raspbian 10.2.1-6+rpi1) 10.2.1 20210110
I CXX: g++ (Raspbian 10.2.1-6+rpi1) 10.2.1 20210110
make: 'chat' is up to date.
pi@raspberrypi:~/Desktop/alpaca.cpp $ ./chat
main: seed = 1680591939
llama_model_load: loading model from 'ggml-alpaca-7b-q4.bin' - please wait ...
llama_model_load: failed to open 'ggml-alpaca-7b-q4.bin'
main: failed to load model from 'ggml-alpaca-7b-q4.bin'
pi@raspberrypi:~/Desktop/alpaca.cpp $
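For what it's worth, that last error ("failed to open") usually just means the weights file isn't where ./chat is looking, rather than an incompatibility. A quick sanity check from the same directory (filename taken from the log above):

```shell
# check that the model file the loader tries to open is actually in the
# current working directory before running ./chat
if [ -f ggml-alpaca-7b-q4.bin ]; then
    echo "model file present"
else
    echo "model file missing"
fi
```

If it prints "model file missing", download or move ggml-alpaca-7b-q4.bin into the directory you launch ./chat from (or pass its path on the command line if the build supports that).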