intel-analytics / ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, vLLM, GraphRAG, DeepSpeed, Axolotl, etc.
Apache License 2.0

DeepSpeed multi-instance AutoTP support on Xeon #9246

Open qiyuangong opened 1 year ago

qiyuangong commented 1 year ago

@Jasonzzt Please try #9230 on the Xeon platform.
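
For reference, a minimal sketch of what a DeepSpeed AutoTP test script on Xeon might look like. The model path, prompt, and generation settings below are assumptions for illustration, not taken from #9230; the key point is that passing `replace_with_kernel_inject=False` to `deepspeed.init_inference` selects the AutoTP path, which shards the model across ranks without GPU kernel injection and can therefore run on CPU-only machines:

```python
import os
import torch
import deepspeed
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical model path; any HF causal LM that AutoTP can shard should work.
MODEL_PATH = "meta-llama/Llama-2-7b-hf"

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, torch_dtype=torch.bfloat16)

# AutoTP: with replace_with_kernel_inject=False, DeepSpeed automatically
# partitions the model's linear layers across ranks instead of injecting
# fused GPU kernels, so the same path can run on CPU-only (Xeon) machines.
model = deepspeed.init_inference(
    model,
    mp_size=int(os.getenv("WORLD_SIZE", "1")),  # TP degree, set by the launcher
    dtype=torch.bfloat16,
    replace_with_kernel_inject=False,           # take the AutoTP path
)

inputs = tokenizer("What is AI?", return_tensors="pt")
with torch.inference_mode():
    output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

A script like this would typically be started with the DeepSpeed launcher, one rank per model shard (e.g. `deepspeed --bind_cores_to_rank autotp_test.py`; whether the core-binding flag is available depends on the DeepSpeed version).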

jason-dai commented 1 year ago

@glorysdj @qiyuangong please assign someone from the team

qiyuangong commented 1 year ago

> @glorysdj @qiyuangong please assign someone from the team

OK. @Uxito-Ada will take this task. :)