Accelerate local LLM inference and fine-tuning (LLaMA, Mistral, ChatGLM, Qwen, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., a local PC with an iGPU and NPU, or a discrete GPU such as Arc, Flex, and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, vLLM, GraphRAG, DeepSpeed, Axolotl, etc.
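For context, the HuggingFace integration mentioned above follows ipex-llm's documented drop-in `ipex_llm.transformers` API. A minimal sketch (the model ID, prompt, and generation length here are illustrative, not taken from this PR):

```python
# Minimal sketch: load a HuggingFace causal LM through ipex-llm's drop-in
# transformers API with on-the-fly 4-bit quantization, then run generation
# on the Intel GPU ("xpu"). Model ID and prompt are illustrative.
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM

model_id = "meta-llama/Llama-2-7b-chat-hf"  # any HF causal LM works similarly
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)
model = model.to("xpu")
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("What is an Intel XPU?", return_tensors="pt").to("xpu")
with torch.inference_mode():
    output = model.generate(inputs.input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```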
Upgrade dependency for Windows LNL/ARL support #12424
Description
Upgrade dependencies for Windows Lunar Lake (LNL) / Arrow Lake (ARL) support. For background, see https://github.com/analytics-zoo/nano/issues/1749#issuecomment-2490658294.
PR validation: https://github.com/intel-analytics/ipex-llm-workflow/actions/runs/11950878848
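Beyond the CI run above, a quick local smoke test can confirm the upgraded stack sees the LNL/ARL GPU from PyTorch. This is a hedged sketch: it assumes `intel_extension_for_pytorch` registers the `torch.xpu` backend, and the device index and messages are illustrative.

```python
# Smoke test after the dependency upgrade: check that the Intel GPU (XPU)
# is visible from PyTorch. Assumes intel_extension_for_pytorch registers
# the torch.xpu backend; device index 0 is illustrative.
import torch
import intel_extension_for_pytorch as ipex  # noqa: F401  (registers "xpu")

if torch.xpu.is_available():
    print("XPU device:", torch.xpu.get_device_name(0))
else:
    print("No XPU device found; check the GPU driver and oneAPI runtime.")
```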