easyfab opened 3 weeks ago
@easyfab This looks like a oneAPI version issue. The binary was built with oneAPI 2024.1.2.20240508, so the same oneAPI 2024.1.2.20240508 runtime must be present at run time.
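A quick sanity check is to compare the version the binary was built with against what the container actually provides (a minimal sketch; in a real oneAPI container you would obtain the runtime version from `icpx --version` or `sycl-ls`, and the hard-coded values below are assumptions for illustration):

```shell
# Version the binary was built with (from the report above, major.minor.patch).
built="2024.1.2"
# Hypothetical runtime version; in practice parse it from `icpx --version`
# or `sycl-ls` inside the running container.
runtime="2024.1.1"
if [ "$built" != "$runtime" ]; then
  echo "oneAPI mismatch: built with $built, runtime has $runtime"
fi
```

If the two strings differ, SYCL kernel lookup can fail at run time even though the build succeeded.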
So if I understand correctly, we need to modify .devops/llama-server-intel.Dockerfile (and the other Intel Dockerfiles in .devops/)?
Change `ARG ONEAPI_VERSION=2024.1.1-devel-ubuntu22.04` to ~~`ARG ONEAPI_VERSION=2024.1.2-devel-ubuntu22.04`~~
Edit: `2024.2.0-1-devel-ubuntu22.04` or `2024.2.1-0-devel-ubuntu22.04`, judging by https://hub.docker.com/r/intel/oneapi/tags
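In Dockerfile terms the proposed change is just the base-image pin (a sketch; the `intel/oneapi-basekit` image name follows llama.cpp's Intel Dockerfiles, and which tag is correct depends on the oneAPI version the binary is actually built with):

```dockerfile
# Pin the oneAPI base image to match the toolchain used at build time.
# The tag is one of the candidates listed on hub.docker.com/r/intel/oneapi/tags.
ARG ONEAPI_VERSION=2024.2.1-0-devel-ubuntu22.04
FROM intel/oneapi-basekit:$ONEAPI_VERSION AS build
```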
I am a total beginner with git. Could you or someone else submit a PR, please?
For info: for me it's the static build that gives this error; a shared build is OK.
I modified the Dockerfile like this. I removed:

```
-DBUILD_SHARED_LIBS=OFF
```

and added:

```dockerfile
COPY --from=build /app/build/ggml/src/libggml.so /libggml.so
COPY --from=build /app/build/src/libllama.so /libllama.so
```
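Put together, the workaround amounts to building shared libraries in the build stage and shipping them into the runtime image (a sketch against an assumed multi-stage layout; the cmake flags and the `LD_LIBRARY_PATH` line are assumptions, not the exact Dockerfile contents):

```dockerfile
# Build stage: omit -DBUILD_SHARED_LIBS=OFF so libggml/libllama are built
# as shared objects (SYCL build flags here are an assumption).
RUN cmake -B build -DGGML_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx && \
    cmake --build build --config Release

# Runtime stage: ship the shared libraries next to the binary and make sure
# the dynamic loader can find them.
COPY --from=build /app/build/ggml/src/libggml.so /libggml.so
COPY --from=build /app/build/src/libllama.so /libllama.so
ENV LD_LIBRARY_PATH=/:$LD_LIBRARY_PATH
```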
I can close the issue if I'm the only one with this issue.
I have been chasing this same error with a newer A770 while trying to use LocalAI (which uses llama.cpp). I'll have to see if I can reproduce it.
What happened?
I can't use Docker + SYCL when using `-ngl > 0`. With `-ngl 0` it's OK.

Error message:

```
No kernel named _ZTSZZL17rms_norm_f32_syclPKfPfiifPN4sycl3_V15queueEiENKUlRNS3_7handlerEE0_clES7_EUlNS3_7nditemILi3EEEE was found -46 (PI_ERROR_INVALID_KERNEL_NAME)
Exception caught at file:/app/ggml/src/ggml-sycl.cpp, line:3528
```
I tried both a local build and ghcr.io/ggerganov/llama.cpp:light-intel.
For info, the ipex-llm intelanalytics/ipex-llm-inference-cpp-xpu Docker image works with HW offload (`-ngl 99`).
Name and Version
ghcr.io/ggerganov/llama.cpp:light-intel
What operating system are you seeing the problem on?
Linux
Relevant log output