intel-analytics / ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, Phi, MiniCPM, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, GraphRAG, DeepSpeed, vLLM, FastChat, Axolotl, etc.

Chronos predict API does not support a torch dataloader, but model fit() does; causes an accuracy issue when a torch dataloader is used for predict #4609

Open yangqing-yq opened 2 years ago

yangqing-yq commented 2 years ago

In https://github.com/intel-analytics/BigDL/python/chronos/src/bigdl/chronos/forecaster/base_forecaster.py, predict() does not support a torch dataloader, but the fit API does. If the user fits the model with the 3rd option (a dataloader) and then predicts, there will be an accuracy issue.

def predict(self, data, batch_size=32, quantize=False):
    """
    Predict using a trained forecaster.

    if you want to predict on a single node (which is common practice), please call
    .to_local().predict(x, ...)

    :param data: The data supports the following formats:

           | 1. a numpy ndarray x:
           | x's shape is (num_samples, lookback, feature_dim) where lookback and feature_dim
           | should be the same as past_seq_len and input_feature_num.
           | 2. an xshard item:
           | each partition can be a dictionary of {'x': x}, where x's shape
           | should follow the shape stated before.
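
As a stopgap, a dataloader can be flattened back into the ndarray format documented above before calling predict(). A minimal sketch with a hypothetical helper (`dataloader_x_to_numpy` is not part of the Chronos API; it assumes the dataloader yields (x, y) batches as described for fit() below, and that it was built with shuffle=False so sample order is preserved):

```python
import torch

def dataloader_x_to_numpy(dataloader):
    """Hypothetical helper (not part of the Chronos API): stack the x batches
    of an (x, y) dataloader into one ndarray with shape
    (num_samples, lookback, feature_dim), which predict() accepts."""
    x_batches = [x_batch for x_batch, _ in dataloader]
    return torch.cat(x_batches, dim=0).cpu().numpy()
```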

def fit(self, data, epochs=1, batch_size=32):
    # TODO: give an option to close validation during fit to save time.
    """
    Fit(Train) the forecaster.

    :param data: The data supports the following formats:

           | 1. a numpy ndarray tuple (x, y):
           | x's shape is (num_samples, lookback, feature_dim) where lookback and feature_dim
           | should be the same as past_seq_len and input_feature_num.
           | y's shape is (num_samples, horizon, target_dim), where horizon and target_dim
           | should be the same as future_seq_len and output_feature_num.
           |
           | 2. an xshard item:
           | each partition can be a dictionary of {'x': x, 'y': y}, where x and y's shape
           | should follow the shape stated before.
           |
           | 3. pytorch dataloader:
           | the dataloader should return x, y in each iteration with the following shapes:
           | x's shape is (num_samples, lookback, feature_dim) where lookback and feature_dim
           | should be the same as past_seq_len and input_feature_num.
           | y's shape is (num_samples, horizon, target_dim), where horizon and target_dim
           | should be the same as future_seq_len and output_feature_num.
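
For reference, a sketch of a dataloader matching the option-3 format above, using synthetic data and illustrative sizes (lookback 24, horizon 5, one feature; these values are not taken from the issue):

```python
import numpy as np
import torch
from torch.utils.data import DataLoader, TensorDataset

# x: (num_samples, lookback, feature_dim), y: (num_samples, horizon, target_dim)
x = np.random.randn(1000, 24, 1).astype(np.float32)
y = np.random.randn(1000, 5, 1).astype(np.float32)

train_loader = DataLoader(TensorDataset(torch.from_numpy(x), torch.from_numpy(y)),
                          batch_size=32, shuffle=True)
```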
| Epoch | Train data MSE | Validation data MSE |
|-------|----------------|---------------------|
| 2     | 2.3            | 4.1                 |
| 5     | 1.54           | 14.9                |
| 7     | 1.3            | 15.8                |
| 10    | 1.07           | 5.1                 |
| 13    | 0.8            | 12.9                |
| 15    | 0.8            | 2.7                 |
| 18    | 0.7            | 7.0                 |
| 30    | 0.7            | 8.2                 |
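
Putting it together, a minimal reproduction sketch. It assumes TCNForecaster (one concrete Chronos forecaster built on this base class) and the synthetic shapes above; the import path and hyperparameters are illustrative, not taken from the issue:

```python
import numpy as np
import torch
from torch.utils.data import DataLoader, TensorDataset
from bigdl.chronos.forecaster.tcn_forecaster import TCNForecaster  # assumed import path

x = np.random.randn(1000, 24, 1).astype(np.float32)
y = np.random.randn(1000, 5, 1).astype(np.float32)
loader = DataLoader(TensorDataset(torch.from_numpy(x), torch.from_numpy(y)),
                    batch_size=32, shuffle=True)

forecaster = TCNForecaster(past_seq_len=24, future_seq_len=5,
                           input_feature_num=1, output_feature_num=1)
forecaster.fit(loader, epochs=3)        # option 3: fit() accepts a pytorch dataloader
# predict() only documents ndarray / xshards input, so fall back to the ndarray here
yhat = forecaster.predict(x, batch_size=32)
```

Passing `loader` itself to predict() is the unsupported path this issue reports.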
TheaperDeng commented 2 years ago

Thanks for the issue report. We will check the model's accuracy when it is trained with a customized pytorch dataloader, and will support this input type in predict and evaluate soon.