intel-analytics / ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, Phi, MiniCPM, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, GraphRAG, DeepSpeed, vLLM, FastChat, Axolotl, etc.
Apache License 2.0

PyTorch Lightning Accelerator Refactor #3674

Open tchaton opened 2 years ago

tchaton commented 2 years ago

Dear BigDL team,

I am part of the PyTorch Lightning team and found that you have an IpexAccelerator: https://github.com/intel-analytics/BigDL/blob/900905a5396d5b1093e798d633ca44398ab8be2c/python/nano/src/bigdl/nano/pytorch/accelerators/ipex_accelerator.py#L37.

I would like to inform you that PyTorch Lightning is currently undergoing a refactor that is likely to introduce breaking changes to this code around Accelerators, TrainingType plugins, and Precision. However, the API should be stable after 1.6.
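For context, here is a minimal sketch of what a custom CPU-backed accelerator might look like under the refactored 1.6-era API; the class name `IpexAcceleratorSketch` and all method bodies are illustrative assumptions, not BigDL's actual IpexAccelerator:

```python
# Hedged sketch of a custom accelerator under the refactored (1.6-era)
# PyTorch Lightning API. Class name and bodies are illustrative only.
from typing import Any, Dict, List, Union

import torch
from pytorch_lightning.accelerators import Accelerator


class IpexAcceleratorSketch(Accelerator):
    """Hypothetical accelerator that trains on CPU devices (as IPEX does)."""

    @staticmethod
    def parse_devices(devices: Union[int, str]) -> int:
        # Normalize the Trainer's `devices` flag to a process count.
        return int(devices)

    @staticmethod
    def get_parallel_devices(devices: int) -> List[torch.device]:
        # IPEX optimizes CPU execution, so every "device" is the CPU.
        return [torch.device("cpu")] * devices

    @staticmethod
    def auto_device_count() -> int:
        # With `devices="auto"`, fall back to a single CPU process.
        return 1

    @staticmethod
    def is_available() -> bool:
        # CPU is always present; a real check would probe for the IPEX package.
        return True

    def get_device_stats(self, device: torch.device) -> Dict[str, Any]:
        # No device-specific telemetry in this sketch.
        return {}
```

Such an instance could then be passed to the Trainer, e.g. `Trainer(accelerator=IpexAcceleratorSketch(), devices=1)`, though the exact set of hooks may vary between releases.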

Furthermore, you might want to register your framework in our Ecosystem-CI repo and join the PyTorch Lightning Slack to stay up to date with the latest information.

jason-dai commented 2 years ago

@tchaton Thanks for the information; we'll take a look at https://github.com/PyTorchLightning/ecosystem-ci and at the latest PyTorch Lightning refactor.