thuml / iTransformer

Official implementation for "iTransformer: Inverted Transformers Are Effective for Time Series Forecasting" (ICLR 2024 Spotlight), https://openreview.net/forum?id=JePfAI8fah
https://arxiv.org/abs/2310.06625
MIT License

Poor prediction results? #52

Closed kerry999 closed 5 months ago

kerry999 commented 5 months ago

Hello,

I ran `bash ./scripts/multivariate_forecasting/Traffic/iTransformer.sh` and the results are very poor: the prediction captures only the periodicity (which a plain FFT could extract, no AI needed) and nothing else.

[screenshot 2024-01-30 11:59:50]

3140.pdf

The exchange results are also very poor:

[screenshot 2024-01-30 11:50:52]

780.pdf

I ran the algorithm on a Mac mini M2, with minor changes to the script replacing CUDA with MPS, roughly like this: `if torch.backends.mps.is_available(): device = torch.device("mps")`

Is there somewhere we can see the contents of your official test_results directory?
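The CUDA-to-MPS swap described above amounts to a device fallback chain. A minimal sketch of that logic in plain Python (the boolean flags stand in for `torch.cuda.is_available()` and `torch.backends.mps.is_available()`; the function name is illustrative, not from the repo):

```python
def pick_device(cuda_ok: bool, mps_ok: bool) -> str:
    """Prefer CUDA, fall back to Apple MPS, then CPU.

    The flags stand in for torch.cuda.is_available() and
    torch.backends.mps.is_available(); with torch installed,
    the same chain would return a torch.device instead of a string.
    """
    if cuda_ok:
        return "cuda"
    if mps_ok:
        return "mps"
    return "cpu"

# On a Mac mini M2 (no CUDA, MPS present) this selects "mps".
print(pick_device(False, True))
```

Note that MPS numerics and operator coverage can differ slightly from CUDA, though that alone would not normally explain flat predictions.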

kerry999 commented 5 months ago

Attached: the script ran to completion without errors. run_exchange.log

WenWeiTHU commented 5 months ago

Hello, please try to reproduce the results in a CUDA environment with the original, unmodified scripts.

aiot-tech commented 3 months ago

I ran into the same issue when running the original script.

aiot-tech commented 3 months ago

Could the authors provide the results you obtained on the exchange dataset?

ZDandsomSP commented 3 months ago

@aiot-tech [screenshot of results]

Hello, we have rerun the exchange dataset script on our platform, and the results are consistent with the paper.

Please confirm that your experimental environment (including the Python and Torch versions) matches the requirements we provide. If necessary, contact us and we can share the checkpoint (ckpt).

aiot-tech commented 3 months ago

Updating learning rate to 3.90625e-07
    iters: 100, epoch: 10 | loss: 0.1565858
    speed: 0.1291s/iter; left time: 7.8743s
Epoch: 10 cost time: 3.715930461883545
Epoch: 10, Steps: 160 | Train Loss: 0.1216311 Vali Loss: 0.1234729 Test Loss: 0.0862727
EarlyStopping counter: 3 out of 3
Early stopping
>>>>>>>testing : Exchange_96_96_iTransformer_custom_M_ft96_sl48_ll96_pl128_dm8_nh2_el1_dl128_df1_fctimeF_ebTrue_dtExp_projection_0<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
test 1422
test shape: (1422, 1, 96, 8) (1422, 1, 96, 8)
test shape: (1422, 96, 8) (1422, 96, 8)
mse:0.08620689809322357, mae:0.20608624815940857

I got the same metrics as you report above, but the predictions on the test set look very poor. The entire prediction curve resembles a straight line and seems to fit the mean of the true values rather than making accurate predictions. What do your results look like on the test set? Could you share that part, e.g. prediction charts?
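To compare predictions against ground truth yourself, here is a minimal sketch (assuming the run saved `pred.npy` and `true.npy` arrays of shape `(n_windows, pred_len, n_vars)`; the file names, paths, and shapes are assumptions — adjust them to what your run actually wrote):

```python
import numpy as np

def first_step_series(pred, true, var=0):
    """Stitch the first forecast step of each window into one series,
    so the whole test span can be viewed as a single curve."""
    return pred[:, 0, var], true[:, 0, var]

def mse_mae(p, t):
    """Pointwise MSE and MAE between two aligned series."""
    err = np.asarray(p) - np.asarray(t)
    return float(np.mean(err ** 2)), float(np.mean(np.abs(err)))

# Demo on synthetic arrays shaped like (n_windows, pred_len, n_vars);
# replace these with np.load(...) on your own pred.npy / true.npy.
pred = np.zeros((1422, 96, 8))
true = np.zeros((1422, 96, 8))
p, t = first_step_series(pred, true)
print(mse_mae(p, t))
```

The two stitched series can then be plotted on one axis (e.g. with matplotlib) to see whether the forecast tracks the signal or collapses to its mean.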

aiot-tech commented 3 months ago

> (quoting kerry999's comment above)

I get similarly poor predictions on the test set, as kerry999 reported above.

ZDandsomSP commented 3 months ago

@aiot-tech Hello, after checking the experimental results, we can confirm the issue you mention.

First, this repository is mainly intended to reproduce the main metrics reported in the paper. Since you have reproduced those results, we are glad the code ran without any essential issues in your environment.

Second, the exchange_rate dataset is widely considered non-stationary and is genuinely hard to predict. The visualization phenomenon you describe also appears in our experimental environment, but it is in fact a bottleneck for all deep-network methods: at present they can only fit the average value of this non-stationary series. You can visualize results from other baselines to verify this point.
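The "fits the mean" behavior on non-stationary data can be illustrated with a synthetic random walk (an illustrative NumPy sketch, not from the repo): on such a series, the naive forecast that simply repeats the last observed value is already close to optimal, so there is little beyond the local level for a learned model to predict.

```python
import numpy as np

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=2000))  # synthetic random walk

pred_len = 96
split = len(series) - pred_len
true = series[split:]
# Naive baseline: repeat the last observed value over the whole horizon,
# producing exactly the kind of flat line seen in the issue screenshots.
naive = np.full(pred_len, series[split - 1])

mse_naive = float(np.mean((naive - true) ** 2))
print(mse_naive)
```

The same comparison can be run against the repo's saved predictions on exchange_rate: if a trained model does not clearly beat this flat baseline, the flat-looking forecast reflects the data, not a bug.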


In addition, regarding the reproduction of the Traffic dataset results, please ensure that your experimental environment is consistent with ours. We ran the experiments on x86 Ubuntu servers; the log outputs are as follows: [log screenshot]