intel-analytics / ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, Phi, etc.) on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, DeepSpeed, vLLM, FastChat, Axolotl, etc.
Apache License 2.0

failed to run gemma example on wsl #10259

Open lidh15 opened 4 months ago

lidh15 commented 4 months ago

I created the virtual environment with Python 3.9 as the example suggested, but the environment configuration step `pip install --pre --upgrade bigdl-llm[xpu] -f https://developer.intel.com/ipex-whl-stable-xpu` failed because pip could not find a suitable version.
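For reference, the steps I ran were roughly the following (the environment name is arbitrary; this is just a sketch of the example's setup):

```bash
# Create and activate a Python 3.9 environment, as the example suggests
conda create -n llm python=3.9
conda activate llm

# Install bigdl-llm with XPU support; the -f index points at the Intel IPEX wheels
pip install --pre --upgrade bigdl-llm[xpu] -f https://developer.intel.com/ipex-whl-stable-xpu
```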

qiuxin2012 commented 4 months ago

What's the detailed error message? Could you check your network configuration and pip mirror? Make sure bigdl-llm >= 2.5.0b20240226 is available in your pip mirror, and that https://developer.intel.com/ipex-whl-stable-xpu can be reached.
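For example, something like the following can show which versions your mirror actually serves and whether the Intel index is reachable (note: `pip index versions` needs pip >= 21.2; this is just a suggested check, not a required step):

```bash
# List the bigdl-llm versions your configured index/mirror serves
pip index versions bigdl-llm

# Check that the Intel XPU wheel index can be reached from WSL
curl -I https://developer.intel.com/ipex-whl-stable-xpu
```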

lidh15 commented 4 months ago

> What's the detailed error message? Could you check your network configuration and pip mirror? Make sure bigdl-llm >= 2.5.0b20240226 is available in your pip mirror, and that https://developer.intel.com/ipex-whl-stable-xpu can be reached.

Well, there is no version later than 0225 in my mirror. However, I found that bigdl-llm[default] did in fact install.
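For what it's worth, this is roughly how I checked what ended up installed (just a quick listing, package names may vary):

```bash
# See which bigdl / IPEX / torch packages ended up in the environment
pip list | grep -iE "bigdl|intel-extension|torch"
```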

qiuxin2012 commented 4 months ago

Ha, I just tried bigdl-llm[default]: it only installs bigdl-llm itself, without the XPU dependencies. Maybe you encountered network issues when installing IPEX; you can try installing from the wheel instead: https://bigdl.readthedocs.io/en/latest/doc/LLM/Overview/install_gpu.html#install-bigdl-llm-from-wheel. Version 0225 is OK. I also notice you are using WSL; WSL is not tested, so I'm not sure it works fine, but you can give it a try. We recommend using Windows directly, see https://bigdl.readthedocs.io/en/latest/doc/LLM/Overview/install_gpu.html#windows.
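If you do try WSL, a quick sanity check could look like the sketch below (assuming IPEX and the oneAPI runtime are installed; the setvars.sh path may differ on your system):

```bash
# Load the oneAPI runtime environment (path may differ on your system)
source /opt/intel/oneapi/setvars.sh

# Check that PyTorch + IPEX can actually see the Intel GPU from inside WSL
python -c "import torch; import intel_extension_for_pytorch as ipex; print(torch.xpu.is_available())"
```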