intel-analytics / ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, Phi, MiniCPM, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, GraphRAG, DeepSpeed, vLLM, FastChat, Axolotl, etc.
Apache License 2.0

Chronos: dependencies should be cleaned and refactored #4791

Open TheaperDeng opened 2 years ago

TheaperDeng commented 2 years ago

Updated 7/13/2022 according to comments

Please use this demo page to see what prompts we will give our users: https://theaperdeng.github.io/complex-installation-document-panel/

Motivation

bigdl-chronos's dependencies are becoming increasingly unmanageable under the current limited installation options (`default` and `all`), especially now that we support two frameworks (torch and tf).

Install Options

Which option should I install?

| dl backend / ml models | automl | distributed | install option | comment |
|---|---|---|---|---|
| pytorch | no | no | bigdl-chronos[pytorch] | |
| pytorch with inference optimizations | no | no | bigdl-chronos[pytorch] + onnx, onnxruntime | prompt onnxruntime and onnx install cmd |
| pytorch | yes | no | bigdl-chronos[pytorch,automl] | |
| pytorch | no | yes | bigdl-chronos[pytorch,distributed] | |
| pytorch | yes | yes | bigdl-chronos[pytorch,distributed,automl] | |
| tensorflow | no | no | bigdl-chronos[tensorflow] | |
| tensorflow | yes | no | bigdl-chronos[tensorflow,automl] | |
| tensorflow | no | yes | bigdl-chronos[tensorflow,distributed] | |
| tensorflow | yes | yes | bigdl-chronos[tensorflow,distributed,automl] | |
| prophet, arima | yes/no | no | bigdl-chronos[automl] + prophet/pmdarima | prompt prophet/pmdarima install cmd |
| dbscan | no | no | bigdl-chronos | |
| I want all models to work fine! | yes | yes | bigdl-chronos[all] | |
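The "prompt ... install cmd" entries above could be implemented as a lazy dependency check that tells the user exactly which pip command to run. A minimal sketch (the function name `check_optional_deps` and the message wording are illustrative, not Chronos's actual API):

```python
import importlib.util


def check_optional_deps(*packages, extra_hint=None):
    """Raise an informative ImportError if any optional package is missing.

    Returns True when all packages are importable.
    """
    missing = [p for p in packages if importlib.util.find_spec(p) is None]
    if missing:
        # Fall back to a plain pip command if no custom hint is given
        hint = extra_hint or f"pip install {' '.join(missing)}"
        raise ImportError(
            f"Missing optional dependencies: {', '.join(missing)}. "
            f"Install them with: {hint}"
        )
    return True


# Example: guard the ONNX inference-optimization path
try:
    check_optional_deps("onnx", "onnxruntime")
except ImportError as e:
    print(e)  # the user sees the exact install command to run
```

This keeps the default install small while still giving a clear, actionable error the first time an optional code path is used.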

Dependencies details in each option

| install option | dependencies |
|---|---|
| `[]` (default) | pandas, scikit-learn |
| `[pytorch]` | + bigdl-nano[pytorch] |
| `[automl]` | + optuna, configspace |
| `[tensorflow]` | + bigdl-nano[tensorflow] |
| `[distributed]` | + bigdl-orca[automl] |
| `[all]` | + pmdarima, prophet, tsfresh, pyarrow, light-weight hpo dependencies, onnx, onnxruntime, openvino, neural-compressor, optuna, configspace, bigdl-nano[tensorflow], bigdl-orca[automl] |
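In packaging terms, the options above map naturally onto setuptools `extras_require`, with `all` defined as the union of the other extras plus the heavier inference/export stack. A sketch of how the mapping might look (the exact dependency lists are illustrative, taken from the table, not from Chronos's real setup.py):

```python
# Hypothetical setup.py fragment mirroring the dependency table above.
base_deps = ["pandas", "scikit-learn"]

extras = {
    "pytorch": ["bigdl-nano[pytorch]"],
    "tensorflow": ["bigdl-nano[tensorflow]"],
    "automl": ["optuna", "configspace"],
    "distributed": ["bigdl-orca[automl]"],
}

# "all" = union of every other extra, plus the inference/export stack
extras["all"] = sorted(
    {dep for deps in extras.values() for dep in deps}
    | {"pmdarima", "prophet", "tsfresh", "pyarrow",
       "onnx", "onnxruntime", "openvino", "neural-compressor"}
)

# In setup.py this would then be passed as:
# setup(..., install_requires=base_deps, extras_require=extras)
```

Deriving `all` programmatically keeps it from drifting out of sync with the individual options as they evolve.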
TheaperDeng commented 2 years ago

@shane-huang @liangs6212 please have a look and provide some feedback if possible.

shane-huang commented 2 years ago

I think we should make our default install a minimum viable option for the most common, or our most recommended, usage. In that sense:

shane-huang commented 2 years ago

Maybe separate the light-weight HPO dependencies from the other `all` options, since light-weight HPO only requires optuna and configspace.

shane-huang commented 2 years ago

Another related topic we may consider: shall we separate chronos further into chronos-forecaster, chronos-detector, chronos-simulator, and chronos-data?

TheaperDeng commented 2 years ago

> Another related topic we may consider: shall we separate chronos further into chronos-forecaster, chronos-detector, chronos-simulator, and chronos-data?

This might make Chronos too complex, and since the detector and simulator are not that large or mature, we may just stick with a single bigdl-chronos package for now.