Dear Team from BigDL,
I am part of the PyTorch Lightning team and noticed that you have an IpexAccelerator: https://github.com/intel-analytics/BigDL/blob/900905a5396d5b1093e798d633ca44398ab8be2c/python/nano/src/bigdl/nano/pytorch/accelerators/ipex_accelerator.py#L37.
I would like to inform you that PyTorch Lightning is currently undergoing a refactor which is likely to introduce breaking changes for this code around the Accelerators, TrainingType plugins, and Precision plugins. However, the API should be stable after 1.6.
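For reference, here is a minimal sketch of what a custom accelerator could look like against the refactored, 1.6-era `Accelerator` base class. The class name `IpexAcceleratorSketch` and the returned values are illustrative assumptions, and the exact set of hooks may still shift until 1.6 stabilizes:

```python
import torch
from typing import Any, Dict, Union

from pytorch_lightning.accelerators import Accelerator


class IpexAcceleratorSketch(Accelerator):
    """Illustrative sketch only: a custom accelerator implementing the
    1.6-era hooks (hook names may still change before the API stabilizes)."""

    def setup_environment(self, root_device: torch.device) -> None:
        # Called once before training to prepare the target device/environment.
        super().setup_environment(root_device)

    def get_device_stats(self, device: Union[str, torch.device]) -> Dict[str, Any]:
        # Return device telemetry for logging; an empty dict if none is available.
        return {}

    def teardown(self) -> None:
        # Release any resources acquired in setup_environment.
        pass

    @staticmethod
    def auto_device_count() -> int:
        # Device count to use when the user requests devices="auto";
        # the value here is a placeholder assumption.
        return 1
```

The key direction of the refactor is that the `Accelerator` shrinks to a device-management interface, while the training logic moves into the TrainingType plugins, which is why code subclassing or wrapping the old interface is likely to break.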
Furthermore, you might want to register your framework in our Ecosystem-CI repo and join the PyTorch Lightning Slack to stay up to date with the latest developments.