Open · EMPERORAYUSH opened this issue 1 month ago
Hi @EMPERORAYUSH, could you please run https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/scripts to check the system environment and reply with the output?
(llm-cpp) C:\AYUSH PANDEY\xpu-smi-1.2.38-20240718.060120.0db09695_win>.\env-check
Python 3.11.9
-----------------------------------------------------------------
transformers=4.43.3
-----------------------------------------------------------------
torch=2.2.0+cpu
-----------------------------------------------------------------
Name: ipex-llm
Version: 2.1.0b20240802
Summary: Large Language Model Develop Toolkit
Home-page: https://github.com/intel-analytics/ipex-llm
Author: BigDL Authors
Author-email: bigdl-user-group@googlegroups.com
License: Apache License, Version 2.0
Location: C:\Users\ayush\miniforge-pypy3\envs\llm-cpp\Lib\site-packages
Requires:
Required-by:
-----------------------------------------------------------------
IPEX is not installed properly.
-----------------------------------------------------------------
-----------------------------------------------------------------
Traceback (most recent call last):
File "C:\AYUSH PANDEY\xpu-smi-1.2.38-20240718.060120.0db09695_win\check.py", line 179, in <module>
main()
File "C:\AYUSH PANDEY\xpu-smi-1.2.38-20240718.060120.0db09695_win\check.py", line 173, in main
check_cpu()
File "C:\AYUSH PANDEY\xpu-smi-1.2.38-20240718.060120.0db09695_win\check.py", line 111, in check_cpu
values = cpu_info[1]
~~~~~~~~^^^
IndexError: list index out of range
-----------------------------------------------------------------
System Information
Host Name: NZXT-CUSTOM
OS Name: Microsoft Windows 11 Home Single Language
OS Version: 10.0.22631 N/A Build 22631
OS Manufacturer: Microsoft Corporation
OS Configuration: Standalone Workstation
OS Build Type: Multiprocessor Free
Registered Owner: HP
Registered Organization: HP
Product ID: 00327-35901-64212-AAOEM
Original Install Date: 14-03-2024, 12:13:12
System Boot Time: 04-08-2024, 17:02:13
System Manufacturer: HP
System Model: HP Laptop 15q-ds3xxx
System Type: x64-based PC
Processor(s): 1 Processor(s) Installed.
[01]: Intel64 Family 6 Model 126 Stepping 5 GenuineIntel ~1190 Mhz
BIOS Version: Insyde F.36, 03-02-2021
Windows Directory: C:\WINDOWS
System Directory: C:\WINDOWS\system32
Boot Device: \Device\HarddiskVolume1
System Locale: hi;Hindi
Input Locale: 00004009
Time Zone: (UTC+05:30) Chennai, Kolkata, Mumbai, New Delhi
Total Physical Memory: 7,974 MB
Available Physical Memory: 1,792 MB
Virtual Memory: Max Size: 20,262 MB
Virtual Memory: Available: 13,770 MB
Virtual Memory: In Use: 6,492 MB
Page File Location(s): C:\pagefile.sys
Domain: WORKGROUP
Logon Server: N/A
Hotfix(s): 4 Hotfix(s) Installed.
[01]: KB5037591
[02]: KB5027397
[03]: KB5040442
[04]: KB5039338
Network Card(s): 2 NIC(s) Installed.
[01]: Realtek RTL8821CE 802.11ac PCIe Adapter
Connection Name: Wi-Fi
DHCP Enabled: Yes
DHCP Server: 192.168.115.69
IP address(es)
[01]: 192.168.115.162
[02]: Bluetooth Device (Personal Area Network)
Connection Name: Bluetooth Network Connection
Status: Media disconnected
Hyper-V Requirements: A hypervisor has been detected. Features required for Hyper-V will not be displayed.
-----------------------------------------------------------------
Error: Level Zero Initialization Error
xpu-smi is not installed properly.
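For what it's worth, the `IndexError` in the traceback above comes from check.py indexing the second line of the CPU-info output (`values = cpu_info[1]`), which fails when the underlying command returns fewer lines than expected. Below is a minimal sketch of the failing pattern and a defensive alternative, assuming the script parses the output of a `wmic`-style command (the exact command is an assumption):

```python
import subprocess

# Reconstruction of what check.py roughly does (the exact command is an
# assumption); `wmic` prints a header line followed by value lines.
out = subprocess.run(["wmic", "cpu", "get", "Name"],
                     capture_output=True, text=True)
cpu_info = [line for line in out.stdout.splitlines() if line.strip()]

# Failing pattern: assumes a header line plus at least one value line.
# values = cpu_info[1]  # IndexError when the output has fewer lines

# Defensive version: degrade gracefully when the output is short.
if len(cpu_info) > 1:
    print("CPU:", cpu_info[1].strip())
else:
    print("Could not parse CPU info; raw output:", repr(out.stdout))
```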
I installed oneAPI and ran this command from within the llm-cpp environment and folder.
Your output shows `Error: Level Zero Initialization Error`, which is why your Ollama cannot run the model. You may refer to this guide to install the prerequisites on your device.
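After installing the prerequisites, a quick way to confirm that Level Zero and the XPU backend are visible is to query the device count from Python; a minimal sketch, assuming intel-extension-for-pytorch is installed in the active environment:

```python
import torch
import intel_extension_for_pytorch as ipex  # noqa: F401  (registers the 'xpu' device)

# If Level Zero initializes correctly, at least one XPU device should appear.
print("XPU available:", torch.xpu.is_available())
print("XPU device count:", torch.xpu.device_count())
if torch.xpu.is_available():
    print("Device name:", torch.xpu.get_device_name(0))
```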
@sgwhat So after installation (and completing the full guide and running the Qwen 1.8B model), how can I run it on Ollama? Should I wipe my miniforge and start again from scratch?
@sgwhat I created a new environment, installed all the pip packages, and then ran the Python code. But right after importing `ipex_llm.transformers`, I saw this warning (although the import was successful):
C:\Users\ayush\miniforge-pypy3\envs\llm\Lib\site-packages\intel_extension_for_pytorch\xpu\lazy_init.py:80: UserWarning: XPU Device count is zero! (Triggered internally at C:/Users/arc/ruijie/2.1_RC3/python311/frameworks.ai.pytorch.ipex-gpu/csrc/gpu/runtime/Device.cpp:127.) _C._initExtension()
Also, when I set tensor_1:
tensor_1 = torch.randn(1, 1, 40, 128).to('xpu')
Python immediately crashes (drops out of the interactive shell).
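One way to keep the interpreter alive while debugging is to check for an XPU device before moving tensors to it; a small sketch under the same setup (this does not fix the underlying zero-device problem, it only avoids triggering the crash):

```python
import torch
import intel_extension_for_pytorch as ipex  # noqa: F401  (registers the 'xpu' device)

# Fall back to CPU when no XPU device is visible, instead of crashing
# inside the native runtime.
device = "xpu" if torch.xpu.device_count() > 0 else "cpu"
tensor_1 = torch.randn(1, 1, 40, 128).to(device)
print("tensor_1 is on:", tensor_1.device)
```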
Have you downloaded and installed the GPU driver from the official Intel download page?
@sgwhat I have UHD graphics, so if I download Arc or Iris Xe drivers, my motherboard would brick. That's why I didn't download the drivers. Also, it was marked optional in the docs.
You may need to install the GPU driver from the official Intel download page; we have verified that it works on UHD graphics. Please install the latest version of the GPU driver as required; "optional" only refers to upgrading an already-installed driver.
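To confirm which driver version actually ends up installed, the display adapters can be listed from Python; a sketch, assuming a Windows host with PowerShell available:

```python
import subprocess

# List display adapters and their driver versions via PowerShell CIM.
result = subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     "Get-CimInstance Win32_VideoController | "
     "Select-Object Name, DriverVersion | Format-List"],
    capture_output=True, text=True,
)
print(result.stdout)
```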
I also ran into this problem and read the posts above. I tried installing the latest version (32.0.101.5768) of the GPU driver from the URL offered by @sgwhat; however, the installer said it could not find any driver that can be installed for this device, installer exit code 8 (translated from Chinese).
My CPU is an i5-10400, and the latest driver on Intel's download site that supports this CPU is 31.0.101.2128. Does this mean this CPU cannot be used by IPEX-LLM for Ollama?
Hi @vinixwu, we have not tested 10th-generation CPUs, but you may try installing the GPU driver and running Ollama.
I have set up ipex-llm by following "Install IPEX-LLM for llama.cpp" up to step 2, since my main goal was to run Ollama on my integrated Intel UHD graphics (the 3rd step was just an example).
Then, following the "Initialize Ollama" quickstart, I initialized Ollama and set the environment variables, and then served Ollama with `ollama serve`.
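What I ran is roughly equivalent to the following sketch; the environment variable names are taken from the IPEX-LLM Ollama quickstart, so treat the exact set as an assumption that may differ by version:

```python
import os
import subprocess

# Environment variables the IPEX-LLM Ollama quickstart sets before serving
# (assumed set; check the quickstart for your version).
env = os.environ.copy()
env.update({
    "OLLAMA_NUM_GPU": "999",            # offload all layers to the GPU
    "ZES_ENABLE_SYSMAN": "1",           # enable Level Zero sysman
    "SYCL_CACHE_PERSISTENT": "1",       # persist the compiled SYCL kernel cache
    "no_proxy": "localhost,127.0.0.1",
})

# Launch the ollama binary created by init-ollama.bat in the current folder.
subprocess.run(["ollama", "serve"], env=env)
```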
Now, after serving Ollama, I saw this output in the terminal:
Now, when I try to run any model, for example tinyllama, I see this output in the miniforge prompt (where Ollama is running):
The above is a continuation of the previous output.
And in the terminal where I tried to run the model, by running:
ollama run tinyllama
I see this:
Error: llama runner process has terminated: exit status 0xc0000409
Please help me fix this issue!