hipudding / llama.cpp

LLM inference in C/C++
MIT License

Make error? #3

Open. zmf2022 opened this issue 1 month ago:

Building fails with the following errors:

```
llama.cpp/ggml-impl.h:283:27: error: implicit declaration of function ‘vld1q_s16_x2’; did you mean ‘vld1q_s16’? [-Werror=implicit-function-declaration]
 #define ggml_vld1q_s16_x2 vld1q_s16_x2
llama.cpp/ggml-quants.c:4780:41: note: in expansion of macro ‘ggml_vld1q_s16_x2’
     const ggml_int16x8x2_t q8sums = ggml_vld1q_s16_x2(y[i].bsums);
llama.cpp/ggml-impl.h:283:27: error: invalid initializer
 #define ggml_vld1q_s16_x2 vld1q_s16_x2
```

hipudding commented 3 weeks ago

This PR is still in progress; please wait for the release.