intel-analytics / ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, Phi, MiniCPM, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, GraphRAG, DeepSpeed, vLLM, FastChat, Axolotl, etc.
Apache License 2.0

How can we enable GPU training on Chronos #8558

Open pankajr141 opened 1 year ago

pankajr141 commented 1 year ago

When I run Chronos with the default arguments, I get something like:

[ INFO ] Created a temporary directory at /tmp/tmpsd1jejn6
[ INFO ] Writing /tmp/tmpsd1jejn6/_remote_module_non_scriptable.py
[ INFO ] GPU available: True, used: False
[ INFO ] TPU available: False, using: 0 TPU cores
[ INFO ] IPU available: False, using: 0 IPUs
[ INFO ] HPU available: False, using: 0 HPUs
[ INFO ] 
  | Name  | Type             | Params
-------------------------------------------
0 | model | NormalizeTSModel | 41.2 K
1 | loss  | MSELoss          | 0     
-------------------------------------------

As you can see above, even though a GPU is available, it is not used. The documentation is also not clear about how to enable it.

Could you please take this example and explain where I can set GPU or CPU: https://github.com/intel-analytics/BigDL/blob/main/python/chronos/example/auto_model/autolstm_nyc_taxi.py
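
For context, the "GPU available: True, used: False" messages above come from PyTorch Lightning, which Chronos uses under the hood. In plain Lightning the device is selected on the Trainer; the sketch below is only an illustration of that Lightning-level switch (the `TinyModel` and all arguments are placeholders, not the Chronos API, and assume Lightning >= 1.7):

```python
import torch
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset


class TinyModel(pl.LightningModule):
    """Stand-in model; the real model in the linked example is built internally by Chronos."""

    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(8, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.mse_loss(self.layer(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters())


# In plain Lightning, the accelerator/devices arguments on the Trainer decide
# whether the available GPU is actually used. Chronos constructs its Trainer
# internally and does not currently expose an equivalent argument.
use_gpu = torch.cuda.is_available()
trainer = pl.Trainer(
    max_epochs=1,
    accelerator="gpu" if use_gpu else "cpu",  # Lightning >= 1.7 argument style
    devices=1,
)

data = DataLoader(TensorDataset(torch.randn(64, 8), torch.randn(64, 1)), batch_size=16)
trainer.fit(TinyModel(), data)
```

So my question is whether (and where) Chronos lets me pass this kind of setting through, for example in the autolstm_nyc_taxi.py script linked above.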

jason-dai commented 1 year ago

Currently, GPU training is not supported in Chronos.