Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, Phi, MiniCPM, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, GraphRAG, DeepSpeed, vLLM, FastChat, Axolotl, etc.
Apache License 2.0
[Discussion] Operations needed to be supported in shards #5694
To provide a better user experience with Orca shards, this issue is created to discuss which operations need to be supported in Orca shards.
- [ ] Scaler
- [ ] Encode categorical variables
- [x] Merge (join): tracked at https://github.com/orgs/analytics-zoo/projects/14/views/4
- [ ] Not (`~` operation in pandas)
- [ ] Statistics
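For reference, the operations listed above can be sketched in plain pandas as below. This is only an illustration of the single-node semantics that a sharded implementation would need to mirror; the column names and data are made up, and the scaler shown is a simple min-max scaler rather than any particular Orca API.

```python
import pandas as pd

# Toy data standing in for one shard's partition.
df = pd.DataFrame({
    "city": ["NY", "SF", "NY"],
    "price": [10.0, 20.0, 30.0],
})

# Scaler: min-max scale a numeric column into [0, 1].
p = df["price"]
df["price_scaled"] = (p - p.min()) / (p.max() - p.min())

# Encode categorical variables: map category labels to integer codes.
df["city_code"] = df["city"].astype("category").cat.codes

# Merge (join): join against another table on a key column.
lookup = pd.DataFrame({"city": ["NY", "SF"], "tax": [0.08, 0.09]})
df = df.merge(lookup, on="city", how="left")

# Not: boolean negation via pandas' `~` operator.
expensive = df["price"] > 15
cheap = df[~expensive]

# Statistics: summary statistics such as describe().
stats = df["price"].describe()
```

Note that a scaler and `~`-style filtering are per-partition friendly, while merge and global statistics require coordination across shards, which is part of what makes this discussion necessary.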
The operations above are motivated by the following notebooks:
- https://www.kaggle.com/code/pmarcelino/comprehensive-data-exploration-with-python operations used:
- https://www.kaggle.com/code/isaienkov/riiid-answer-correctness-prediction-eda-modeling operations used:
- https://www.kaggle.com/code/ammar111/youtube-trending-videos-analysis operations used:
- https://www.kaggle.com/code/jiashenliu/introduction-to-financial-concepts-and-data operations used: