OvaltineSamuel opened this issue 5 months ago
Hi @OvaltineSamuel, a normal output log on MTL looks like:
llm_load_print_meta: BOS token = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token = 151645 '<|im_end|>'
llm_load_print_meta: PAD token = 151643 '<|endoftext|>'
llm_load_print_meta: LF token = 148848 'ÄĬ'
llm_load_print_meta: EOT token = 151645 '<|im_end|>'
[SYCL] call ggml_init_sycl
ggml_init_sycl: GGML_SYCL_DEBUG: 0
ggml_init_sycl: GGML_SYCL_F16: no
found 4 SYCL devices:
| | | | |Max | |Max |Global | |
| | | | |compute|Max work|sub |mem | |
|ID| Device Type| Name|Version|units |group |group|size | Driver version|
|--|-------------------|---------------------------------------|-------|-------|--------|-----|-------|---------------------|
| 0| [level_zero:gpu:0]| Intel Arc Graphics| 1.3| 112| 1024| 32| 15482M| 1.3.29283|
| 1| [opencl:gpu:0]| Intel Arc Graphics| 3.0| 112| 1024| 32| 15482M| 31.0.101.5534|
| 2| [opencl:cpu:0]| Intel Core Ultra 5 125H| 3.0| 18| 8192| 64| 33945M|2023.16.12.0.12_195853.xmain-hotfix|
| 3| [opencl:acc:0]| Intel FPGA Emulation Device| 1.2| 18|67108864| 64| 33945M|2023.16.12.0.12_195853.xmain-hotfix|
ggml_backend_sycl_set_mul_device_mode: true
detect 1 SYCL GPUs: [0] with top Max compute units:112
llm_load_tensors: ggml ctx size = 0.37 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
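For reference, when several backends expose the same iGPU (as in the table above), the oneAPI runtime's ONEAPI_DEVICE_SELECTOR variable can pin llama.cpp to the level_zero device. A minimal sketch; the main invocation is illustrative only, not a command from this thread:

```shell
# Restrict SYCL to the first level_zero GPU so llama.cpp does not pick the
# OpenCL CPU/FPGA entries. ONEAPI_DEVICE_SELECTOR is a standard oneAPI
# runtime variable.
export ONEAPI_DEVICE_SELECTOR=level_zero:0
echo "SYCL device filter: $ONEAPI_DEVICE_SELECTOR"
# ./main -m model.gguf -ngl 33   # hypothetical llama.cpp invocation
```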
It seems your program can't find a SYCL device, which then raises this error. Could you please let us know your iGPU driver version?
Yes, I'm using the latest Driver for MTL iGPU
Please run the ENV-Check script in https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/scripts
Hi @OvaltineSamuel I have verified that on our local MTL, this driver version works:
Could you please provide us more env details with https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/scripts and also show us your output of ls-sycl-device?
So do I need to install OneAPI basetoolkit for using ipex-llm in this case?
You don't need to install the oneAPI Base Toolkit yourself. When you pip install --pre --upgrade ipex-llm[cpp], oneAPI 2024.0 is already installed in your llm-cpp env. Please make sure you always run your program in the llm-cpp env.
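For context, that setup flow can be sketched as a few commands (names taken from the quickstart; the conda/pip/init steps are left as comments because they need a real Miniforge prompt and network access):

```shell
# Assumed env name from the quickstart; adjust if yours differs.
env_name="llm-cpp"
# conda create -n "$env_name" python=3.11
# conda activate "$env_name"
# pip install --pre --upgrade ipex-llm[cpp]   # bundles the oneAPI 2024.0 runtime
# init-llama-cpp.bat                          # Windows: run as administrator
echo "run all llama.cpp commands inside the $env_name env"
```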
I encounter a problem like this too when I follow the guide at https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/install_linux_gpu.html and run sycl-ls. Running ls-sycl-device gives: "ls-sycl-device: command not found".
The results of running the ENV-Check script (bash env-check.sh) are as follows:
Operating System: Ubuntu 22.04.4 LTS
CLI: Version: 1.2.35.20240425 Build ID: 00000000
SYCL Exception encountered: Native API failed. Native API returns: -30 (PI_ERROR_INVALID_VALUE) -30 (PI_ERROR_INVALID_VALUE)
Hi @JJJohnathan We have not verified on kernel 6.7.1. Below is a sample output on our Linux MTL:
Maybe you can try with kernel 6.5 following https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/install_linux_gpu.html#for-linux-kernel-6-5
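Since the quickstart recommends kernel 6.5 for Linux MTL iGPU support, checking the running kernel before debugging SYCL errors is a quick first step. A minimal sketch:

```shell
# Print the running kernel and its major version; compare against the
# kernel the ipex-llm Linux quickstart was verified on (6.5).
kernel="$(uname -r)"
major="${kernel%%.*}"
echo "running kernel: $kernel (major version $major)"
```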
@rnwang04 I see. Anyway thx!
Please run the ENV-Check script in https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/scripts
Below is the output of env-check script
Python 3.11.9
-----------------------------------------------------------------
transformers=4.41.2
-----------------------------------------------------------------
torch=2.2.0+cpu
-----------------------------------------------------------------
Name: ipex-llm
Version: 2.1.0b20240607
Summary: Large Language Model Develop Toolkit
Home-page: https://github.com/intel-analytics/ipex-llm
Author: BigDL Authors
Author-email: bigdl-user-group@googlegroups.com
License: Apache License, Version 2.0
Location: C:\Users\Ovalt\miniforge3\envs\ipex-llm_ollama_v2\Lib\site-packages
Requires:
Required-by:
-----------------------------------------------------------------
IPEX is not installed properly.
-----------------------------------------------------------------
Total Memory: 15.376 GB
Chip 0 Memory: 16 GB | Speed: 5600 MHz
-----------------------------------------------------------------
CPU Manufacturer: GenuineIntel
CPU MaxClockSpeed: 3800
CPU Name: Intel(R) Core(TM) Ultra 7 155H
CPU NumberOfCores: 16
CPU NumberOfLogicalProcessors: 22
-----------------------------------------------------------------
GPU 0: Intel(R) Graphics Driver Version: 31.0.101.5522
-----------------------------------------------------------------
-----------------------------------------------------------------
System Information
Host Name: EXPERTBOOK-B5
OS Name: Microsoft Windows 11 Pro
OS Version: 10.0.22631 N/A Build 22631
OS Manufacturer: Microsoft Corporation
OS Configuration: Standalone Workstation
OS Build Type: Multiprocessor Free
Registered Organization: N/A
Product ID: 00355-61488-69042-AAOEM
Original Install Date: 4/30/2024, 6:54:59 AM
System Boot Time: 6/8/2024, 4:19:15 PM
System Manufacturer: ASUSTeK COMPUTER INC.
System Model: ASUS EXPERTBOOK B5404CMA_B5404CMA
System Type: x64-based PC
Processor(s): 1 Processor(s) Installed.
[01]: Intel64 Family 6 Model 170 Stepping 4 GenuineIntel ~2280 Mhz
BIOS Version: ASUSTeK COMPUTER INC. (Licensed by AMI, LLC.) B5404CMA.208, 2/7/2024
Windows Directory: C:\Windows
System Directory: C:\Windows\system32
Boot Device: \Device\HarddiskVolume1
System Locale: en-us;English (United States)
Input Locale: en-us;English (United States)
Time Zone: (UTC+08:00) Taipei
Total Physical Memory: 15,745 MB
Available Physical Memory: 6,476 MB
Virtual Memory: Max Size: 42,369 MB
Virtual Memory: Available: 29,067 MB
Virtual Memory: In Use: 13,302 MB
Page File Location(s): C:\pagefile.sys
Domain: WORKGROUP
Logon Server: \\EXPERTBOOK-B5
Hotfix(s): 5 Hotfix(s) Installed.
[01]: KB5037591
[02]: KB5027397
[03]: KB5036212
[04]: KB5037853
[05]: KB5037959
Network Card(s): 3 NIC(s) Installed.
[01]: Intel(R) Wi-Fi 6E AX211 160MHz
Connection Name: Wi-Fi
DHCP Enabled: Yes
DHCP Server: 1.1.1.1
IP address(es)
[01]: 10.174.192.173
[02]: fe80::bbb6:b406:183a:7a63
[02]: Intel(R) Ethernet Connection (18) I219-V
Connection Name: Ethernet
Status: Media disconnected
[03]: Bluetooth Device (Personal Area Network)
Connection Name: Bluetooth Network Connection
Status: Media disconnected
Hyper-V Requirements: A hypervisor has been detected. Features required for Hyper-V will not be displayed.
-----------------------------------------------------------------
'xpu-smi' is not recognized as an internal or external command,
operable program or batch file.
xpu-smi is not installed properly.
It says IPEX is not installed properly. Also, I can't run ls-sycl-device without having the oneAPI Base Toolkit installed.
I tried restarting the laptop and reinstalling the whole conda environment. Running the same env-check.bat gives the same "IPEX is not installed properly" output.
Here is a sample output of our Windows MTL:
You can ignore "IPEX is not installed properly", as IPEX is not needed for running ipex-llm[cpp]. And ls-sycl-device doesn't need the oneAPI toolkit; it's provided by init-llama-cpp.bat. You can just go to your cpp directory and run ls-sycl-device.exe.
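A quick way to distinguish a missing binary from a driver problem is to check whether the helper is on PATH first. A minimal sketch; the message text is illustrative:

```shell
# If init-llama-cpp has not been run in this directory, ls-sycl-device
# simply won't exist, which is a different failure from a driver that
# exposes no SYCL devices.
if command -v ls-sycl-device >/dev/null 2>&1; then
  msg="$(ls-sycl-device)"
else
  msg="ls-sycl-device not found; run init-llama-cpp first"
fi
echo "$msg"
```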
@rnwang04 Thanks for your clarification on that. However, I'm not getting anything when running ls-sycl-device.exe on my end.
Hi @OvaltineSamuel, on our Windows MTL machine, the output of ls-sycl-device.exe looks like:
(ruonan-cpp) D:\ruonan\bmk-llama-cpp>ls-sycl-device
found 4 SYCL devices:
| | | | |Max | |Max |Global | |
| | | | |compute|Max work|sub |mem | |
|ID| Device Type| Name|Version|units |group |group|size | Driver version|
|--|-------------------|---------------------------------------|-------|-------|--------|-----|-------|---------------------|
| 0| [level_zero:gpu:0]| Intel Arc Graphics| 1.3| 112| 1024| 32| 15482M| 1.3.29283|
| 1| [opencl:gpu:0]| Intel Arc Graphics| 3.0| 112| 1024| 32| 15482M| 31.0.101.5534|
| 2| [opencl:cpu:0]| Intel Core Ultra 5 125H| 3.0| 18| 8192| 64| 33945M|2023.16.12.0.12_195853.xmain-hotfix|
| 3| [opencl:acc:0]| Intel FPGA Emulation Device| 1.2| 18|67108864| 64| 33945M|2023.16.12.0.12_195853.xmain-hotfix|
And my pip list is:
Actually your issue is unrelated to llama.cpp; your machine simply can't find a SYCL device. Sadly, we have not met this issue before and cannot reproduce it... Just a suggestion: maybe you can try installing the latest driver, 5534 (https://www.intel.com/content/www/us/en/download/785597/intel-arc-iris-xe-graphics-windows.html)?
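As a sanity check on whatever ls-sycl-device does print, the device table can be parsed to confirm a level_zero GPU row is present (the backend ipex-llm selects). A minimal sketch using sample rows from this thread:

```python
# Parse ls-sycl-device style table rows and return the names of
# level_zero GPU devices; an empty list means no usable SYCL GPU.
sample = """\
| 0| [level_zero:gpu:0]| Intel Arc Graphics| 1.3| 112| 1024| 32| 15482M| 1.3.29283|
| 1| [opencl:gpu:0]| Intel Arc Graphics| 3.0| 112| 1024| 32| 15482M| 31.0.101.5534|
"""

def find_level_zero_gpus(text: str) -> list[str]:
    """Return device names from rows whose type column is level_zero:gpu."""
    gpus = []
    for line in text.splitlines():
        cells = [c.strip() for c in line.split("|")]
        # cells[2] is the "Device Type" column, cells[3] the "Name" column
        if len(cells) > 3 and cells[2].startswith("[level_zero:gpu"):
            gpus.append(cells[3])
    return gpus

print(find_level_zero_gpus(sample))  # -> ['Intel Arc Graphics']
```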
Got it, will try it out later and let you know if it works. Thanks a lot.
Currently, I have updated the driver to the latest, 5590. However, I'm still not getting any output from ls-sycl-device.exe in the conda env in the llama-cpp folder after running init-llama-cpp.bat. I have tried installing a new conda env and following the quickstart multiple times. Not sure what the problem is here.
Error Description
I am encountering the error Native API returns: -30 (PI_ERROR_INVALID_VALUE) when trying to run llama.cpp with the latest IPEX-LLM, following the official quickstart guide on the IPEX-LLM website: https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/llama_cpp_quickstart.html
Error Output Log
Steps for Error Reproduce
Follow the quickstart guide: create and activate environment and install ipex-llm[cpp] packages.
Set up the folder and run init-llama-cpp.bat as administrator in the Miniforge Prompt.
Set the runtime configuration for Windows.
Run the command below for LLM inference with llama.cpp.
Environment Information