intel-analytics / ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, Phi, MiniCPM, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, GraphRAG, DeepSpeed, vLLM, FastChat, Axolotl, etc.
Apache License 2.0

Chronos: a better unscale support is needed #5791

Open · plusbang opened this issue 2 years ago

plusbang commented 2 years ago

Currently, we provide TSDataset.unscale_numpy to get an unscaled numpy ndarray. We need better unscale support rather than asking users to convert their results to numpy ndarrays first.

example:

```python
yhat = forecaster.predict(x)                          # predictions in the scaled space
yhat_unscaled = tsdata_test.unscale_numpy(yhat)       # convert predictions back to the original scale
y_unscaled = tsdata_test.unscale_numpy(y)             # convert ground truth back to the original scale
Evaluator.evaluate("mse", y_unscaled, yhat_unscaled)  # evaluate in the original scale
```
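For context, the pain point is that `unscale_numpy` only accepts numpy ndarrays, so predictions that come back as torch tensors (for example when predicting from a dataloader) must be converted by hand. Below is a minimal, hypothetical wrapper, not an existing Chronos API, sketching what accepting tensors directly could look like:

```python
# Hypothetical convenience wrapper around the existing TSDataset.unscale_numpy;
# the `unscale` name and the tensor handling are assumptions, not Chronos API.
import numpy as np
import torch


def unscale(tsdata, data):
    """Unscale numpy ndarrays or torch tensors using the scaler fitted on tsdata."""
    if isinstance(data, torch.Tensor):
        unscaled = tsdata.unscale_numpy(data.detach().cpu().numpy())
        return torch.as_tensor(unscaled)
    return tsdata.unscale_numpy(np.asarray(data))
```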
TheaperDeng commented 2 years ago

Maybe adding a collate_fn to the user's dataloader could be a helpful feature, but we need to define the API first.
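A minimal sketch of the collate_fn idea, assuming the dataloader yields (x, y) numpy sample pairs and a fitted sklearn-style scaler is available; `make_unscale_collate_fn` and the surrounding names are hypothetical, not a proposed Chronos API:

```python
# Sketch only: wrap a fitted sklearn-style scaler into a torch.utils.data.DataLoader
# collate_fn so that target batches are returned in the original (unscaled) space.
import numpy as np
import torch


def make_unscale_collate_fn(scaler):
    """Return a collate_fn that stacks samples and inverse-transforms the targets."""
    def collate_fn(batch):
        x = torch.stack([torch.as_tensor(sample[0]) for sample in batch])
        y = np.stack([sample[1] for sample in batch])
        # sklearn scalers expect 2-D input, so flatten the time/feature dims first.
        y_unscaled = scaler.inverse_transform(
            y.reshape(-1, y.shape[-1])).reshape(y.shape)
        return x, torch.as_tensor(y_unscaled)
    return collate_fn


# Usage (hypothetical):
# loader = DataLoader(dataset, batch_size=32,
#                     collate_fn=make_unscale_collate_fn(scaler))
```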