Closed: ayttop closed this issue 2 weeks ago
How do I offload to the iGPU?
C:\Users\ArabTech\Desktop\5\LlamaCppExe>sycl-ls.exe
[opencl:fpga][opencl:0] Intel(R) FPGA Emulation Platform for OpenCL(TM), Intel(R) FPGA Emulation Device OpenCL 1.2 [2024.18.7.0.11_160000]
[opencl:cpu][opencl:1] Intel(R) OpenCL, Intel(R) Core(TM) i7-14700K OpenCL 3.0 (Build 0) [2024.18.7.0.11_160000]
[opencl:gpu][opencl:2] Intel(R) OpenCL Graphics, Intel(R) UHD Graphics 770 OpenCL 3.0 NEO [31.0.101.4577]
[level_zero:gpu][level_zero:0] Intel(R) Level-Zero, Intel(R) UHD Graphics 770 1.3 [1.3.26561]
Maybe have a look here: https://github.com/ggerganov/llama.cpp/blob/master/docs/backend/SYCL.md (a build sketch follows below).
🤞
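For reference, the Windows SYCL build described in that document looks roughly like this. This is a minimal sketch, assuming the Intel oneAPI Base Toolkit is installed; the setvars.bat path and the Ninja generator are assumptions to adapt to your setup:

```bat
:: Activate the oneAPI compiler environment (install path is an assumption).
call "C:\Program Files (x86)\Intel\oneAPI\setvars.bat" intel64 --force

:: Configure llama.cpp with the SYCL backend and build the Release binaries.
cmake -B build -G "Ninja" -DGGML_SYCL=ON -DCMAKE_C_COMPILER=cl -DCMAKE_CXX_COMPILER=icx -DCMAKE_BUILD_TYPE=Release
cmake --build build --config Release
```

A binary built this way should list the detected SYCL devices at startup instead of printing the "not compiled with GPU offload support" warning.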
Using the Vulkan version seems to be more hassle-free.
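For comparison, the Vulkan backend needs only the Vulkan SDK and a single CMake flag. A sketch, assuming the SDK is installed and on PATH:

```bat
:: The iGPU is reached through its regular Vulkan driver; no oneAPI needed.
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release
```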
Thank you. The Intel iGPU works for me with llama.cpp only;
it does not work with llama-cpp-python.
@ayttop: "warning: not compiled with GPU offload support, --gpu-layers option will be ignored" means the binary was compiled without GPU support, so the workload is in fact executed on the CPU.
Could you refer to https://github.com/ggerganov/llama.cpp/blob/master/docs/backend/SYCL.md?
It runs on llama.cpp but not on llama-cpp-python.
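llama-cpp-python compiles llama.cpp from source at install time, so the SYCL backend has to be requested through CMAKE_ARGS when the wheel is built. A sketch based on the llama-cpp-python install docs; run it from a shell where setvars.bat has already been called, and note that the compiler names may need the cl/icx pairing used for the Windows build above:

```bat
:: Force a rebuild of the wheel with the SYCL backend enabled.
set CMAKE_ARGS=-DGGML_SYCL=on -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx
pip install llama-cpp-python --force-reinstall --no-cache-dir --verbose
```

After a successful rebuild, passing n_gpu_layers=30 to llama_cpp.Llama should offload layers to the iGPU instead of silently ignoring the option.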
This issue was closed because it has been inactive for 14 days since being marked as stale.
What happened?
C:\Users\ArabTech\Desktop\5\LlamaCppExe>C:/Users/ArabTech/Desktop/5\LlamaCppExe/llama-cli -m C:/Users/ArabTech/Desktop/5/phi-3.5-mini-instruct-q4_k_m.gguf -p "Who is Napoleon Bonaparte?" --gpu-layers 30 --no-mmap -t 2
warning: not compiled with GPU offload support, --gpu-layers option will be ignored
warning: see main README.md for information on enabling GPU BLAS support
Log start
main: build = 3618 (3ba780e2)
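Once the binary is rebuilt with GGML_SYCL=ON, the same prompt can be pinned to the iGPU. A sketch; the level_zero:0 index comes from the sycl-ls output above:

```bat
:: Select the Level Zero iGPU and offload 30 layers to it.
set ONEAPI_DEVICE_SELECTOR=level_zero:0
llama-cli -m phi-3.5-mini-instruct-q4_k_m.gguf -p "Who is Napoleon Bonaparte?" --gpu-layers 30 --split-mode none --main-gpu 0
```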
Name and Version
latest (build = 3618, commit 3ba780e2, per the log above)
What operating system are you seeing the problem on?
No response
Relevant log output