Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, Phi, MiniCPM, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, GraphRAG, DeepSpeed, vLLM, FastChat, Axolotl, etc.
pyspark 2.4.6 currently blocks many security scans when users integrate BigDL into their products. We are releasing bigdl-chronos, bigdl-orca and bigdl-dllib based on pyspark 2.4.6. We could upgrade these libraries to pyspark 3.1.3, as we do in xxx-spark3.
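As a sketch of what the proposed bump could look like, the pyspark pin in each affected package's setup configuration would change as below (the file layout and exact pin style are assumptions, not taken from the repo):

```python
# Illustrative setup.py fragment for bigdl-chronos / bigdl-orca / bigdl-dllib.
# The upgrade replaces the pyspark 2.4.6 pin with 3.1.3.
install_requires = [
    "pyspark==3.1.3",  # was "pyspark==2.4.6", which trips security scanners
]
```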