intel-analytics / ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, vLLM, GraphRAG, DeepSpeed, Axolotl, etc
Apache License 2.0

XEON and MAX with Kernel 5.15 configuration #11170

Open weiseng-yeap opened 5 months ago

weiseng-yeap commented 5 months ago

Team,

We are currently using Ubuntu Server 22.04 with kernel 5.15.

Could you advise which oneAPI version and GPU driver version work with the latest IPEX-LLM framework?

Thanks!

qiuxin2012 commented 5 months ago

Please follow https://dgpu-docs.intel.com/driver/installation.html to install the GPU driver.

For oneAPI, see https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/install_linux_gpu.html#install-oneapi
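The two guides above boil down to three steps on Ubuntu 22.04: install the Intel GPU driver stack, install the oneAPI Base Toolkit release pinned by the ipex-llm quickstart, then activate the environment and confirm the GPU is visible. A minimal sketch follows; the exact package names and toolkit version are assumptions here, so defer to the linked guides for the authoritative, hardware-specific commands (the Data Center GPU Max series has its own driver section).

```shell
# Hypothetical sketch for Ubuntu Server 22.04 (kernel 5.15).
# Package names below are typical for the Intel client/data-center GPU
# repositories, but the linked dgpu-docs guide is authoritative.

# 1. Install the GPU compute runtime (after adding Intel's apt repository
#    per https://dgpu-docs.intel.com/driver/installation.html).
sudo apt-get update
sudo apt-get install -y intel-opencl-icd intel-level-zero-gpu level-zero

# 2. Install the oneAPI Base Toolkit version recommended by the ipex-llm
#    quickstart (the guide pins a specific release for each ipex-llm version).
sudo apt-get install -y intel-basekit

# 3. Activate the oneAPI environment and verify the GPU is visible to SYCL.
source /opt/intel/oneapi/setvars.sh
sycl-ls   # a working setup lists a level_zero device for the Max GPU
```

If `sycl-ls` shows no `level_zero` device, the driver install (step 1) is the usual culprit rather than oneAPI.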