MAZiqing / FEDformer


FEDformer model training results are not ideal #43

Closed fizzking closed 1 year ago

fizzking commented 1 year ago

I downloaded the ETTh1 dataset and one of the weather datasets, and ran them with the FEDformer, Autoformer, Informer, and Transformer code you provide, but the results differ from those reported in your paper. I set both the input length and the prediction length to 96. Is there a hyperparameter I failed to adjust? My results are as follows: [screenshot: results table]

tianzhou2011 commented 1 year ago

```
python -u run.py --is_training 1 --root_path ./dataset/ETT-small/ --data_path ETTh1.csv --task_id ETTh1 --model FEDformer --data ETTh1 --features M --seq_len 96 --label_len 48 --pred_len 96 --e_layers 2 --d_layers 1 --factor 3 --enc_in 7 --dec_in 7 --c_out 7 --des 'Exp' --d_model 512 --itr 3
```

I just ran the above command a few minutes ago; the results for ETTh1 with FEDformer are shown below. Not sure what is going wrong in your case. [screenshot: results table]
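For reference, here is a minimal sketch of how one might script the same comparison across the four baselines mentioned in the issue (FEDformer, Autoformer, Informer, Transformer) by looping the command above over `--model`. The flag values are copied verbatim from the command in this comment; the model list and the use of `subprocess` are assumptions for illustration, not something from the repo itself.

```python
# Sketch only: loop the run.py command above over several baselines.
# Flags are taken from the command in this thread; the model list is assumed.
import subprocess

MODELS = ["FEDformer", "Autoformer", "Informer", "Transformer"]

base_cmd = [
    "python", "-u", "run.py",
    "--is_training", "1",
    "--root_path", "./dataset/ETT-small/",
    "--data_path", "ETTh1.csv",
    "--task_id", "ETTh1",
    "--data", "ETTh1",
    "--features", "M",
    "--seq_len", "96",
    "--label_len", "48",
    "--pred_len", "96",
    "--e_layers", "2",
    "--d_layers", "1",
    "--factor", "3",
    "--enc_in", "7",
    "--dec_in", "7",
    "--c_out", "7",
    "--des", "Exp",
    "--d_model", "512",
    "--itr", "3",
]

for model in MODELS:
    # Each call trains one architecture; --model selects which one.
    subprocess.run(base_cmd + ["--model", model], check=True)
```

Running the models back to back from one script keeps every setting except `--model` identical, which makes it easier to rule out accidental hyperparameter differences when comparing against the numbers in the paper.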