Closed · The-Boyy closed this issue 2 years ago
Hello, I really like the SCINet model, but when I ran SCINet on ETTm1 the results were very different from those reported in the paper. I trained with the set of parameters provided in the README:

```
python run_ETTh.py --data ETTm1 --features M --seq_len 48 --label_len 24 --pred_len 24 --hidden-size 4 --stacks 1 --levels 3 --lr 0.005 --batch_size 32 --dropout 0.5 --model_name ettm1_M_I48_O24_lr7e-3_bs16_dp0.25_h8_s1l3
```
Training:

```
(pytorch) yangbs@hdu-lab:~/python_file/SCINet-main$ python run_ETTh.py --data ETTm1 --features M --seq_len 48 --label_len 24 --pred_len 24 --hidden-size 4 --stacks 1 --levels 3 --lr 0.005 --batch_size 32 --dropout 0.5 --model_name ettm1_M_I48_O24_lr7e-3_bs16_dp0.25_h8_s1l3
Args in experiment:
Namespace(INN=1, RIN=False, batch_size=32, c_out=7, checkpoints='exp/ETT_checkpoints/', cols=None, concat_len=0, data='ETTm1', data_path='ETTm1.csv', dec_in=7, detail_freq='h', devices='0', dilation=1, dropout=0.5, embed='timeF', enc_in=7, evaluate=False, features='M', freq='h', gpu=0, groups=1, hidden_size=4.0, inverse=False, itr=0, kernel=5, label_len=24, lastWeight=1.0, levels=3, loss='mae', lr=0.005, lradj=1, model='SCINet', model_name='ettm1_M_I48_O24_lr7e-3_bs16_dp0.25_h8_s1l3', num_decoder_layer=1, num_workers=0, patience=5, positionalEcoding=False, pred_len=24, resume=False, root_path='./datasets/ETT-data/', save=False, seq_len=48, single_step=0, single_step_output_One=0, stacks=1, target='OT', train_epochs=100, use_amp=False, use_gpu=True, use_multi_gpu=False, window_size=12)
SCINet(
  (blocks1): EncoderTree(
    (SCINet_Tree): SCINet_Tree(
      ... nested SCINet_Tree_odd / SCINet_Tree_even subtrees (levels=3); every workingblock
      ... is a LevelSCINet -> InteractorLevel -> Interactor with identical phi/psi/P/U branches:
      ...   ReplicationPad1d((3, 3)) -> Conv1d(7, 28, kernel_size=(5,)) -> LeakyReLU(0.01)
      ...   -> Dropout(p=0.5) -> Conv1d(28, 7, kernel_size=(3,)) -> Tanh()
      ... (repeated blocks elided)
    )
  )
  (projection1): Conv1d(48, 24, kernel_size=(1,), stride=(1,), bias=False)
  (div_projection): ModuleList()
)
```
```
start training : SCINet_ETTm1_ftM_sl48_ll24_pl24_lr0.005_bs32_hid4.0_s1_l3_dp0.5_invFalse_itr0>>>>>>>>>>>>>>>>>>>>>>>>>>
train 34489 val 11497 test 11497
exp/ETT_checkpoints/SCINet_ETTm1_ftM_sl48_ll24_pl24_lr0.005_bs32_hid4.0_s1_l3_dp0.5_invFalse_itr0
... (per-iteration loss/speed lines omitted; ~0.036 s/iter, 1077 steps per epoch, ~39 s per epoch) ...
```

Per-epoch results (loss = MAE on normed data; the learning rate decayed by 0.95 each epoch: 0.005 → 0.00475 → 0.0045125 → … → 0.003675459453124999):

| Epoch | Train Loss | Valid Loss (mae) | Test Loss (mae) | Test normed mse | Status |
|---|---|---|---|---|---|
| 1 | 0.3188260 | 0.3829074 | 0.4023201 | 0.4453 | Validation loss decreased (inf → 0.382907). Saving model |
| 2 | 0.2805793 | 0.3797391 | 0.3921406 | 0.4244 | Validation loss decreased (0.382907 → 0.379739). Saving model |
| 3 | 0.2734880 | 0.3857621 | 0.3892814 | 0.4306 | EarlyStopping counter: 1 out of 5 |
| 4 | 0.2690475 | 0.3850551 | 0.3990499 | 0.4437 | EarlyStopping counter: 2 out of 5 |
| 5 | 0.2656843 | 0.3856070 | 0.3961082 | 0.4317 | EarlyStopping counter: 3 out of 5 |
| 6 | 0.2634311 | 0.3958629 | 0.4052754 | 0.4305 | EarlyStopping counter: 4 out of 5 |
| 7 | 0.2614641 | 0.3938183 | 0.3934116 | 0.4276 | EarlyStopping counter: 5 out of 5 → early stopping |

Final test with the best saved checkpoint:

```
save model in exp/ETT_checkpoints/SCINet_ETTm1_ftM_sl48_ll24_pl24_lr0.005_bs32_hid4.0_s1_l3_dp0.5_invFalse_itr0/ETTm124.bin
testing : SCINet_ETTm1_ftM_sl48_ll24_pl24_lr0.005_bs32_hid4.0_s1_l3_dp0.5_invFalse_itr0<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
test 11497
normed mse:0.4244, mae:0.3921, rmse:0.6515, mape:2.0092, mspe:239.8366, corr:0.7195
denormed mse:9.3696, mae:1.4211, rmse:3.0610, mape:inf, mspe:inf, corr:0.7195
Final mean normed mse:0.4244, mae:0.3921, denormed mse:9.3696, mae:1.4211
```
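The "EarlyStopping counter: X out of 5" messages in the log above come from patience-based early stopping on the validation loss (`patience=5` in the args). A minimal sketch of that logic — the class and method names here are hypothetical, not the repository's actual implementation:

```python
class EarlyStopping:
    """Stop training once validation loss has not improved for `patience` epochs."""

    def __init__(self, patience=5):
        self.patience = patience
        self.counter = 0
        self.best_loss = float("inf")
        self.should_stop = False

    def step(self, val_loss):
        if val_loss < self.best_loss:
            # Improvement: record the new best and reset the counter
            # (the real training loop also saves a checkpoint here).
            self.best_loss = val_loss
            self.counter = 0
        else:
            self.counter += 1
            if self.counter >= self.patience:
                self.should_stop = True
        return self.should_stop
```

Feeding it the seven validation losses from the run above triggers a stop exactly after epoch 7: the best loss (0.3797 at epoch 2) is never beaten again, so the counter climbs from 1 to 5 over epochs 3–7.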
As a result, training stopped after only 7 epochs, and the final result is far from the one in the paper. What problems could be causing this? I have been stuck on reproducing the ETTm1 results; I would be very grateful if you could answer. Thank you!
Because we adjusted the model afterwards, the experimental results may differ somewhat. I just tried the experiment you mentioned; this is the result of my run:
Do you run into similar problems on other datasets? Since your result differs greatly from what we get when we run it, we cannot yet determine the cause.
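One common source of run-to-run variance in this kind of reproduction gap is unseeded randomness: weight initialization, dropout masks, and data shuffling all differ between runs unless the seeds are fixed. A minimal sketch of seed fixing — the `seed_everything` helper is hypothetical, not part of the SCINet code, and the PyTorch-specific calls are left as comments so the snippet is self-contained:

```python
import os
import random


def seed_everything(seed: int = 2021) -> None:
    """Fix the seeds that drive a typical training run's randomness."""
    random.seed(seed)
    os.environ["PYTHONHASHSEED"] = str(seed)
    # In an actual PyTorch run, also fix (assuming numpy/torch are imported):
    #   numpy.random.seed(seed)
    #   torch.manual_seed(seed)
    #   torch.cuda.manual_seed_all(seed)
    #   torch.backends.cudnn.deterministic = True


# With identical seeds, consecutive runs draw identical random numbers.
seed_everything(2021)
run_a = [random.random() for _ in range(5)]
seed_everything(2021)
run_b = [random.random() for _ in range(5)]
```

Even with all seeds fixed, some CUDA kernels remain nondeterministic, so small differences between machines are still possible; gaps as large as the one reported here usually point elsewhere (data, hyperparameters, or code version).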