YHYHYHYHYHY / ATFNet

Official implementation of ATFNet: Adaptive Time-Frequency Ensembled Network for Long-term Time Series Forecasting

Need help #4

Open wangwei-aigou opened 3 months ago

wangwei-aigou commented 3 months ago

Please forgive a beginner's question:

I got the following results from these two commands (the first run without setting the random seed, the second with the random seed set):

  1. python run.py --model ATFNet --data custom

    long_term_forecast_test_ATFNet_custom_ftM_sl96_ll48_pl96_dm512_nh8_el2_dl1_df2048_fc1_ebtimeF_dtTrue_test_0
    mse:0.4213832164222476, mae:0.44682774889074733

  2. python run.py --model ATFNet --data custom --features MS --seq_len 192 --label_len 96 --pred_len 192

    long_term_forecast_test_ATFNet_custom_ftMS_sl192_ll96_pl192_dm512_nh8_el2_dl1_df2048_fc1_ebtimeF_dtTrue_test_0
    mse:0.11834045227729217, mae:0.2682139742565186

The comparison graphs obtained from both runs look quite poor. Could you tell me the reason for this? 80.pdf

Thank you very much!

YHYHYHYHYHY commented 3 months ago

If you execute the python run.py command directly, any unspecified hyperparameters fall back to their default values. In your specific case, the experiments were therefore conducted on the ETTh1 dataset, and the results should closely match those reported in the original paper (Table 9 and Table 10), with minor variations.
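To make this concrete, here is a minimal sketch of how an argparse-based run.py falls back to defaults. The argument names below are inferred from your setting string (sl96_ll48_pl96_dm512_...) and are illustrative, not copied verbatim from this repository:

    # Illustrative sketch only: argparse keeps each default when a flag is
    # not passed on the command line (argument names assumed, not verified
    # against this repository's run.py).
    import argparse

    parser = argparse.ArgumentParser(description='experiment runner sketch')
    parser.add_argument('--model', type=str, default='ATFNet')
    parser.add_argument('--data', type=str, default='ETTh1')   # dataset loader name
    parser.add_argument('--features', type=str, default='M')   # M / S / MS task
    parser.add_argument('--seq_len', type=int, default=96)     # input window length
    parser.add_argument('--label_len', type=int, default=48)   # decoder start tokens
    parser.add_argument('--pred_len', type=int, default=96)    # forecast horizon
    parser.add_argument('--d_model', type=int, default=512)    # the "dm512" in the setting string
    args = parser.parse_args()

    # "python run.py --model ATFNet --data custom" leaves everything else at
    # the defaults above, which is why the setting string reads sl96_ll48_pl96_dm512_...
    print(args)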

Regarding the comparison graphs, assuming you are referring to the visualization of predictions against ground truth, the forecasting performance does appear unsatisfactory on the segment you provided. This can be attributed to the characteristics of the data: the test segment may differ substantially from the segments seen during training, which makes it hard for a model fitted on the training set to forecast accurately there. Moreover, this segment lacks clear trend or seasonal patterns, and if similar segments were absent from the training set, it is inherently difficult to predict. Trying alternative models would likely yield similar outcomes.
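If you want to check the distribution-shift explanation yourself, one quick diagnostic is to compare summary statistics of the training split with the test segment. A rough sketch, assuming the standard ETTh1 CSV layout with target column OT and an approximate chronological split (the file path and exact split used by the data loader may differ):

    # Rough diagnostic sketch (assumed file path, target column, and split
    # ratios; adjust to match the repository's data loader).
    import pandas as pd

    df = pd.read_csv('./dataset/ETT-small/ETTh1.csv')

    n = len(df)
    train = df.iloc[: int(0.7 * n)]   # approximate chronological train split
    test = df.iloc[int(0.8 * n):]     # approximate chronological test split

    # Large gaps in mean/std between the splits hint at distribution shift,
    # which would explain poor forecasts on some test segments.
    print(train['OT'].describe())
    print(test['OT'].describe())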

It is important to note that every model has its share of unfavorable cases, and evaluation should primarily focus on the overall forecasting performance across the entire test set, using metrics such as Mean Squared Error (MSE) and Mean Absolute Error (MAE).
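For reference, that whole-test-set evaluation amounts to averaging the squared and absolute errors over every prediction window. A minimal sketch, assuming preds and trues are NumPy arrays of identical shape [n_windows, pred_len, n_vars]:

    # Minimal sketch of whole-test-set MSE/MAE (array shapes assumed above).
    import numpy as np

    def mse(preds: np.ndarray, trues: np.ndarray) -> float:
        return float(np.mean((preds - trues) ** 2))

    def mae(preds: np.ndarray, trues: np.ndarray) -> float:
        return float(np.mean(np.abs(preds - trues)))

    # Example with random placeholders standing in for real model output:
    preds = np.random.rand(100, 96, 7)
    trues = np.random.rand(100, 96, 7)
    print(f"mse: {mse(preds, trues):.4f}, mae: {mae(preds, trues):.4f}")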