intel-analytics / ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, Phi, etc.) on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, DeepSpeed, vLLM, FastChat, Axolotl, etc.

chronos #7486

Open YUKUN-XIAO opened 1 year ago

YUKUN-XIAO commented 1 year ago

Can Chronos run on a Ray cluster that I built myself?

TheaperDeng commented 1 year ago

Hi,

Yes, Chronos can run on a Ray cluster. Please refer to the following guides:

  1. Chronos Forecaster (training & inferencing)

     You can use a native Ray cluster for this function (a minimal sketch follows this list). Please refer to:

     a. document b. example c. API doc 1; API doc 2

  2. XShardsTSDataset (data processing)

     This function needs a Spark cluster. Please refer to:

     a. document b. example c. API doc 1; API doc 2

  3. Chronos AutoTS (AutoML on time-series analysis)

     This function needs a Spark cluster and the RayOnSpark function. Please refer to:

     a. document b. example c. API doc 1; API doc 2
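
For reference, here is a minimal sketch of item 1 (a Forecaster trained through the distributed backend). It is not taken from the docs linked above: the `TCNForecaster` arguments marked as assumed (`distributed`, `workers_per_node`) and the `init_orca_context` options may differ across BigDL/Chronos versions, and connecting to a Ray cluster you started yourself is configured through `init_orca_context`, so please check its API doc for your version.

```python
# Minimal sketch, not from the linked docs: distributed Forecaster training
# through the Ray-based Orca runtime. Arguments marked "assumed" may differ
# across BigDL/Chronos versions.
import numpy as np
from bigdl.orca import init_orca_context, stop_orca_context
from bigdl.chronos.forecaster.tcn_forecaster import TCNForecaster

# Start / attach the Orca runtime that Chronos uses for distributed execution.
# How you point this at a Ray cluster you built yourself depends on your
# BigDL version -- check init_orca_context's options in the API doc.
init_orca_context(cores=4, memory="8g")

# Toy rolled data: (samples, lookback, features) and (samples, horizon, targets).
x = np.random.randn(1000, 48, 1).astype(np.float32)
y = np.random.randn(1000, 5, 1).astype(np.float32)

forecaster = TCNForecaster(
    past_seq_len=48,
    future_seq_len=5,
    input_feature_num=1,
    output_feature_num=1,
    distributed=True,       # assumed: run fit/predict through the distributed backend
    workers_per_node=1,     # assumed knob; see the Forecaster API doc
)
forecaster.fit((x, y), epochs=2)
pred = forecaster.predict(x)   # expected shape: (samples, horizon, targets)

stop_orca_context()
```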