intel-analytics / ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, vLLM, GraphRAG, DeepSpeed, Axolotl, etc.
Apache License 2.0

Release a stable version for LLM inference with 2x or 4x Arc A770 #11576

Open · Fred-cell opened this issue 4 months ago

Fred-cell commented 4 months ago

The supported-model requirements have been provided by the CTI project; Dongjie has received this table.

glorysdj commented 4 months ago

We will verify the models in the list and plan the stable version release.
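
For context on what such a release would need to cover, below is a minimal single-card inference sketch using ipex-llm's HuggingFace-style API. The model path, prompt, and device index `xpu:0` are illustrative assumptions, not from this thread; splitting a model across 2x or 4x Arc A770 cards would instead go through the tensor-parallel (DeepSpeed AutoTP) examples shipped in this repo.

```python
# Minimal sketch: INT4 inference on one Intel Arc GPU with ipex-llm.
# Assumptions: a local Llama-2-7B checkpoint at ./llama-2-7b (hypothetical
# path) and one Arc A770 visible to PyTorch as device "xpu:0".
import torch
from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_path = "./llama-2-7b"  # hypothetical local checkpoint

# Load with ipex-llm's low-bit (INT4) optimization, then move to the Arc GPU.
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    load_in_4bit=True,
    trust_remote_code=True,
)
model = model.half().to("xpu:0")

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
inputs = tokenizer("What is Intel Arc?", return_tensors="pt").to("xpu:0")

with torch.inference_mode():
    output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

A stable multi-card release would additionally have to validate the tensor-parallel path (sharding each listed model across 2 or 4 A770s) rather than only the single-device flow sketched above.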