icannotnamemyself / FAN

Apache License 2.0

Could you provide the training script? After I adjusted the training set for normalization, the results are not ideal. #2

Open iuaku opened 4 days ago

iuaku commented 4 days ago

Hello, could you provide the training script? After I adjusted the training set for normalization, the results are not ideal. Or is it enough to just run all the models with the default commands in the README?

wayne155 commented 4 days ago

How did you run the experiment? Could you give more details? Is it one of the datasets from the paper?

wayne155 commented 4 days ago

Pay attention to the choice of K: the faster the frequency content changes, the larger K needs to be. There is a piece of code in the notebooks directory that gives a rough estimate. If you are using the datasets from the paper, refer to our values.
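
For reference, here is a rough sketch of one way to estimate a reasonable `freq_topk` from the data. This is not the repo's notebook code, and the 90% energy threshold is an assumption:

```python
import numpy as np

def estimate_freq_topk(x, energy_ratio=0.9):
    """Smallest K whose top-K rFFT components cover `energy_ratio`
    of the non-DC spectral energy of the 1-D series x."""
    amp = np.abs(np.fft.rfft(x - x.mean()))   # amplitude spectrum, DC removed
    energy = np.sort(amp ** 2)[::-1]          # per-frequency energy, descending
    cum = np.cumsum(energy) / energy.sum()
    return int(np.searchsorted(cum, energy_ratio) + 1)

# e.g. on the target column of ETTh1:
# k = estimate_freq_topk(df["OT"].values)
```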

wayne155 commented 4 days ago

The training script is just the one in the README; modify it however you need.

iuaku commented 4 days ago

This is my command: ./scripts/run_fan_wandb.sh "SCINet" "FAN" "ETTh1 " "96 168 336 720" "cuda:0" 96 "{freq_topk:4}", and the results are roughly as follows: [screenshot of results]

wayne155 commented 4 days ago

Did you let it run to completion? SCINet itself is fairly slow to train; you could try DLinear first.

wayne155 commented 4 days ago

Do you have results from a completed run?

iuaku commented 4 days ago

One more thing: I changed this part... [screenshot]

iuaku commented 4 days ago

At this point it is pretty much finished; early stopping has already kicked in after 5 epochs without improvement.

wayne155 commented 4 days ago

Here is my training log. If you changed the validation split, the error will definitely come out higher. Try making K larger; I'm worried the K reported in our paper might be wrong.

[2024-04-25 20:08:33] - Total Trainable Params: 59798
[2024-04-25 20:08:33] - model parameters: 59798
[2024-04-25 20:08:57] - Epoch: 1 cost time: 23.695834636688232
[2024-04-25 20:08:57] - Traininng loss : 0.9683223977050883
[2024-04-25 20:09:02] - vali_results: {'mae': 0.4461630880832672, 'mape': 5.618561267852783, 'mse': 0.41370654106140137, 'rmse': 0.6432002186775208}
[2024-04-25 20:09:05] - test_results: {'mae': 0.4558417499065399, 'mape': 8.854572296142578, 'mse': 0.40061473846435547, 'rmse': 0.6329413652420044}
[2024-04-25 20:09:28] - Epoch: 2 cost time: 54.84706997871399
[2024-04-25 20:09:28] - Traininng loss : 0.7606826324729209
[2024-04-25 20:09:34] - vali_results: {'mae': 0.42991191148757935, 'mape': 5.251162052154541, 'mse': 0.3961555063724518, 'rmse': 0.6294088363647461}
[2024-04-25 20:09:37] - test_results: {'mae': 0.44149401783943176, 'mape': 7.693170547485352, 'mse': 0.3823273181915283, 'rmse': 0.6183262467384338}
[2024-04-25 20:10:00] - Epoch: 3 cost time: 87.19757437705994
[2024-04-25 20:10:00] - Traininng loss : 0.7412603234357023
[2024-04-25 20:10:06] - vali_results: {'mae': 0.42768004536628723, 'mape': 5.149500846862793, 'mse': 0.39203500747680664, 'rmse': 0.6261270046234131}
[2024-04-25 20:10:09] - test_results: {'mae': 0.4369499683380127, 'mape': 7.144054889678955, 'mse': 0.37618961930274963, 'rmse': 0.6133430004119873}
[2024-04-25 20:10:33] - Epoch: 4 cost time: 119.45310282707214
[2024-04-25 20:10:33] - Traininng loss : 0.7319397465821277
[2024-04-25 20:10:38] - vali_results: {'mae': 0.4232451617717743, 'mape': 5.142523288726807, 'mse': 0.3868127763271332, 'rmse': 0.6219427585601807}
[2024-04-25 20:10:42] - test_results: {'mae': 0.4308702349662781, 'mape': 7.186767578125, 'mse': 0.3688293695449829, 'rmse': 0.6073132157325745}
[2024-04-25 20:11:05] - Epoch: 5 cost time: 152.260644197464
[2024-04-25 20:11:05] - Traininng loss : 0.7267800080490873
[2024-04-25 20:11:10] - vali_results: {'mae': 0.4202914834022522, 'mape': 5.216790199279785, 'mse': 0.3844988942146301, 'rmse': 0.620079755783081}
[2024-04-25 20:11:13] - test_results: {'mae': 0.42954549193382263, 'mape': 7.221964359283447, 'mse': 0.3676564693450928, 'rmse': 0.606346845626831}
[2024-04-25 20:11:36] - Epoch: 6 cost time: 182.58972430229187
[2024-04-25 20:11:36] - Traininng loss : 0.7221398654770343
[2024-04-25 20:11:41] - vali_results: {'mae': 0.41987770795822144, 'mape': 5.235918045043945, 'mse': 0.3848990797996521, 'rmse': 0.6204023361206055}
[2024-04-25 20:11:44] - test_results: {'mae': 0.42870262265205383, 'mape': 7.091451644897461, 'mse': 0.3666110634803772, 'rmse': 0.605484127998352}
[2024-04-25 20:12:07] - Epoch: 7 cost time: 214.27082800865173
[2024-04-25 20:12:07] - Traininng loss : 0.7171909874106975
[2024-04-25 20:12:13] - vali_results: {'mae': 0.4193999469280243, 'mape': 5.092340469360352, 'mse': 0.3839782774448395, 'rmse': 0.6196597814559937}
[2024-04-25 20:12:17] - test_results: {'mae': 0.42933395504951477, 'mape': 6.646405220031738, 'mse': 0.36695989966392517, 'rmse': 0.6057721376419067}
[2024-04-25 20:12:39] - Epoch: 8 cost time: 245.95861840248108
[2024-04-25 20:12:39] - Traininng loss : 0.7131920203249505
[2024-04-25 20:12:44] - vali_results: {'mae': 0.4196569621562958, 'mape': 5.212482929229736, 'mse': 0.38477006554603577, 'rmse': 0.6202983856201172}
[2024-04-25 20:12:48] - test_results: {'mae': 0.4299030303955078, 'mape': 6.6728715896606445, 'mse': 0.3680388033390045, 'rmse': 0.6066620349884033}
[2024-04-25 20:13:12] - Epoch: 9 cost time: 278.7359387874603
[2024-04-25 20:13:12] - Traininng loss : 0.7097570454662151
[2024-04-25 20:13:16] - vali_results: {'mae': 0.41978010535240173, 'mape': 5.382425785064697, 'mse': 0.38314199447631836, 'rmse': 0.6189846396446228}
[2024-04-25 20:13:22] - test_results: {'mae': 0.42532065510749817, 'mape': 6.990736484527588, 'mse': 0.36187201738357544, 'rmse': 0.6015579700469971}
[2024-04-25 20:13:46] - Epoch: 10 cost time: 312.41159319877625
[2024-04-25 20:13:46] - Traininng loss : 0.705459713935852
[2024-04-25 20:13:50] - vali_results: {'mae': 0.41802337765693665, 'mape': 5.177587032318115, 'mse': 0.3806135356426239, 'rmse': 0.6169388294219971}
[2024-04-25 20:13:54] - test_results: {'mae': 0.4280519187450409, 'mape': 6.596354007720947, 'mse': 0.3645992875099182, 'rmse': 0.6038205623626709}
[2024-04-25 20:14:17] - Epoch: 11 cost time: 343.349990606308
[2024-04-25 20:14:17] - Traininng loss : 0.703900721716754
[2024-04-25 20:14:21] - vali_results: {'mae': 0.41673263907432556, 'mape': 5.279829978942871, 'mse': 0.3814280927181244, 'rmse': 0.6175986528396606}
[2024-04-25 20:14:25] - test_results: {'mae': 0.4280064105987549, 'mape': 6.787834167480469, 'mse': 0.3653193414211273, 'rmse': 0.60441654920578}
[2024-04-25 20:14:52] - Epoch: 12 cost time: 378.58416199684143
[2024-04-25 20:14:52] - Traininng loss : 0.7024511639425095
[2024-04-25 20:14:57] - vali_results: {'mae': 0.41742080450057983, 'mape': 5.191147327423096, 'mse': 0.3815731108188629, 'rmse': 0.6177160143852234}
[2024-04-25 20:15:01] - test_results: {'mae': 0.4289223253726959, 'mape': 6.7511210441589355, 'mse': 0.3658641576766968, 'rmse': 0.6048670411109924}
[2024-04-25 20:15:25] - Epoch: 13 cost time: 412.2621078491211
[2024-04-25 20:15:25] - Traininng loss : 0.6994962165171795
[2024-04-25 20:15:31] - vali_results: {'mae': 0.4199606478214264, 'mape': 5.054266452789307, 'mse': 0.3824893534183502, 'rmse': 0.6184572577476501}
[2024-04-25 20:15:34] - test_results: {'mae': 0.4311707615852356, 'mape': 6.435349941253662, 'mse': 0.3674392104148865, 'rmse': 0.6061676144599915}
[2024-04-25 20:15:59] - Epoch: 14 cost time: 445.4104995727539
[2024-04-25 20:15:59] - Traininng loss : 0.6979518079377235
[2024-04-25 20:16:03] - vali_results: {'mae': 0.42210161685943604, 'mape': 5.151847839355469, 'mse': 0.38280436396598816, 'rmse': 0.6187118291854858}
[2024-04-25 20:16:07] - test_results: {'mae': 0.43115749955177307, 'mape': 6.314486026763916, 'mse': 0.36726295948028564, 'rmse': 0.6060222387313843}
[2024-04-25 20:16:32] - Epoch: 15 cost time: 478.40935730934143
[2024-04-25 20:16:32] - Traininng loss : 0.694735327775174
[2024-04-25 20:16:37] - vali_results: {'mae': 0.419955313205719, 'mape': 5.345251083374023, 'mse': 0.3825300633907318, 'rmse': 0.6184901595115662}
[2024-04-25 20:16:41] - test_results: {'mae': 0.42663687467575073, 'mape': 6.568150520324707, 'mse': 0.3622191846370697, 'rmse': 0.60184645652771}
[2024-04-25 20:16:41] - loss no decreased for 5 epochs, early stopping ....
[2024-04-25 20:16:45] - test_results: {'mae': 0.4280519187450409, 'mape': 6.596354007720947, 'mse': 0.3645992875099182, 'rmse': 0.6038205623626709}

wayne155 commented 4 days ago

Config:

{
  "dataset_type": "ETTh1",
  "optm_type": "Adam",
  "model_type": "SCINet",
  "scaler_type": "StandarScaler",
  "loss_func_type": "mse",
  "batch_size": 32,
  "lr": 0.0003,
  "l2_weight_decay": 0.0005,
  "epochs": 100,
  "horizon": 1,
  "windows": 96,
  "pred_len": 96,
  "patience": 5,
  "max_grad_norm": 5.0,
  "invtrans_loss": false,
  "norm_type": "FAN",
  "norm_config": { "freq_topk": 8 },
  "data_path": "./data",
  "device": "cuda:0",
  "num_worker": 20,
  "save_dir": "./results",
  "experiment_label": "1714045732",
  "hid_size": 1,
  "num_stacks": 1,
  "num_levels": 3,
  "num_decoder_layer": 1,
  "concat_len": 0,
  "groups": 1,
  "kernel": 5,
  "dropout": 0.5,
  "single_step_output_One": 0,
  "input_len_seg": 1,
  "positionalE": false,
  "modified": true,
  "RIN": false
}

iuaku commented 4 days ago

OK. The usual split ratio is 7:1:2; you could also try that when you have time. As for the training script, I couldn't find one that accepts arguments, so I could only test with the commands in the README.

iuaku commented 4 days ago

I'll try your parameters tomorrow.

wayne155 commented 4 days ago

Ah, I see. You can write a bash script yourself and just append arguments like --hid_size to the python command. A lot of the code here I implemented myself... at the time I insisted on using the fire library to avoid hand-writing a CLI, and it has stayed that way ever since.
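
For anyone else reading this, a minimal sketch of the pattern described above: python-fire turns a function's keyword arguments directly into CLI flags. The function name and parameters below are hypothetical, not the repo's actual entry point:

```python
# Minimal illustration of python-fire instead of argparse: keyword arguments
# of the exposed function become CLI flags automatically.
# `train` and its parameters are hypothetical names for this sketch.
import fire

def train(model_type="SCINet", norm_type="FAN", dataset_type="ETTh1",
          pred_len=96, hid_size=1, freq_topk=8, device="cuda:0"):
    print(f"run {model_type}+{norm_type} on {dataset_type}: "
          f"pred_len={pred_len}, hid_size={hid_size}, freq_topk={freq_topk}")
    # ... build the dataset/model and start training here ...

if __name__ == "__main__":
    fire.Fire(train)  # enables e.g.: python train.py --hid_size 2 --freq_topk 8
```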

iuaku commented 4 days ago

OK. If I run into problems I'll contact you via the email in the paper.

wayne155 commented 4 days ago

> OK. The usual split ratio is 7:1:2; you could also try that when you have time. As for the training script, I couldn't find one that accepts arguments, so I could only test with the commands in the README.

OK. I last worked on time series back in 2021, and at that time some multi-step forecasting papers, such as DCRNN and MTGNN, split the data 7:2:1. I then moved on to other things and stopped doing time series, but never changed the code. Thanks for pointing it out.
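
To make the difference concrete, a quick sketch of how the two split conventions map to chronological index boundaries (the row count here is only illustrative of ETTh1's size; the repo's splitting logic may differ):

```python
# Compare 7:1:2 vs 7:2:1 chronological splits for a series of n rows.
n = 17420  # roughly the length of ETTh1; adjust for your dataset
for name, (tr, va) in {"7:1:2": (0.7, 0.1), "7:2:1": (0.7, 0.2)}.items():
    n_train, n_val = int(n * tr), int(n * va)
    print(f"{name}: train=[0, {n_train}), "
          f"val=[{n_train}, {n_train + n_val}), test=[{n_train + n_val}, {n})")
```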

wayne155 commented 4 days ago
[screenshot: MTGNN]

wayne155 commented 4 days ago

> OK. If I run into problems I'll contact you via the email in the paper.

Sure, that works.