LibCity / Bigscity-LibCity

LibCity: An Open Library for Urban Spatial-temporal Data Mining
https://libcity.ai/
Apache License 2.0

Hello, why is GRU listed when running DCRNN? Is this a documentation error? #351

Closed OvOYu closed 1 year ago

OvOYu commented 1 year ago

image

aptx1231 commented 1 year ago

Yes, it is. Thanks for pointing that out!

OvOYu commented 1 year ago

Hello, what is the difference between ASTGCN and ASTGCNCommon?

OvOYu commented 1 year ago

What I mean is that I noticed some models have a version with a "Common" suffix. Is there any difference between the two versions of the same model?

aptx1231 commented 1 year ago

The "Common" variants exist because the original models (e.g. STResNet) require three input segments: the previous hour, the same time slot one day earlier, and the same time slot one week earlier. The "Common" version is a modification that takes only a single segment, e.g. just the previous hour, so that all models can be compared fairly.
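
For illustration only, here is a rough sketch of the two input styles expressed as dataset parameters. The parameter names match the ASTGCN configuration printed in the training log further down; the values are just examples, not the library defaults:

```python
# Rough illustration of the two input styles (names follow the ASTGCN config
# shown in the log below; values are examples only).

# Original multi-segment input (ASTGCN / STResNet style): three kinds of history.
multi_segment_input = {
    "len_closeness": 2,    # recent windows just before the prediction time
    "len_period": 1,       # same time slot interval_period days earlier
    "len_trend": 2,        # same time slot interval_trend days earlier
    "interval_period": 1,  # in days
    "interval_trend": 7,   # in days
}

# "Common" variant: a single recent window only, for a fair comparison.
single_segment_input = {
    "input_window": 12,    # only the previous 12 time steps
    "output_window": 12,
}
```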

OvOYu commented 1 year ago

Thank you for the quick reply!

OvOYu commented 1 year ago

I'd like to ask a possibly basic question: how do I run the test stage? After running the training I see a .tar file generated for every epoch. What are these files for?

OvOYu commented 1 year ago

2023-08-09 15:56:18,944 - INFO - Epoch [0/100] train_loss: 14465.6103, val_loss: 10248.0996, lr: 0.000100, 103.64s 2023-08-09 15:56:19,001 - INFO - Saved model at 0 2023-08-09 15:56:19,001 - INFO - Val loss decrease from inf to 10248.0996, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch0.tar 2023-08-09 15:57:41,330 - INFO - epoch complete! 2023-08-09 15:57:41,330 - INFO - evaluating now! 2023-08-09 15:57:58,344 - INFO - Epoch [1/100] train_loss: 6814.1271, val_loss: 4005.9501, lr: 0.000100, 99.34s 2023-08-09 15:57:58,401 - INFO - Saved model at 1 2023-08-09 15:57:58,401 - INFO - Val loss decrease from 10248.0996 to 4005.9501, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch1.tar 2023-08-09 15:59:23,319 - INFO - epoch complete! 2023-08-09 15:59:23,319 - INFO - evaluating now! 2023-08-09 15:59:40,002 - INFO - Epoch [2/100] train_loss: 2508.9717, val_loss: 1423.6543, lr: 0.000100, 101.60s 2023-08-09 15:59:40,062 - INFO - Saved model at 2 2023-08-09 15:59:40,062 - INFO - Val loss decrease from 4005.9501 to 1423.6543, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch2.tar 2023-08-09 16:01:02,627 - INFO - epoch complete! 2023-08-09 16:01:02,627 - INFO - evaluating now! 2023-08-09 16:01:19,349 - INFO - Epoch [3/100] train_loss: 1175.5264, val_loss: 895.1244, lr: 0.000100, 99.29s 2023-08-09 16:01:19,430 - INFO - Saved model at 3 2023-08-09 16:01:19,431 - INFO - Val loss decrease from 1423.6543 to 895.1244, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch3.tar 2023-08-09 16:02:41,749 - INFO - epoch complete! 2023-08-09 16:02:41,749 - INFO - evaluating now! 2023-08-09 16:02:59,114 - INFO - Epoch [4/100] train_loss: 865.0875, val_loss: 724.9629, lr: 0.000100, 99.68s 2023-08-09 16:02:59,173 - INFO - Saved model at 4 2023-08-09 16:02:59,173 - INFO - Val loss decrease from 895.1244 to 724.9629, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch4.tar 2023-08-09 16:04:23,159 - INFO - epoch complete! 2023-08-09 16:04:23,159 - INFO - evaluating now! 2023-08-09 16:04:39,620 - INFO - Epoch [5/100] train_loss: 736.2818, val_loss: 649.4947, lr: 0.000100, 100.45s 2023-08-09 16:04:39,677 - INFO - Saved model at 5 2023-08-09 16:04:39,677 - INFO - Val loss decrease from 724.9629 to 649.4947, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch5.tar 2023-08-09 16:06:02,098 - INFO - epoch complete! 2023-08-09 16:06:02,099 - INFO - evaluating now! 2023-08-09 16:06:18,610 - INFO - Epoch [6/100] train_loss: 665.1679, val_loss: 609.9767, lr: 0.000100, 98.93s 2023-08-09 16:06:18,666 - INFO - Saved model at 6 2023-08-09 16:06:18,666 - INFO - Val loss decrease from 649.4947 to 609.9767, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch6.tar 2023-08-09 16:07:42,448 - INFO - epoch complete! 2023-08-09 16:07:42,449 - INFO - evaluating now! 2023-08-09 16:07:59,409 - INFO - Epoch [7/100] train_loss: 617.5797, val_loss: 582.5920, lr: 0.000100, 100.74s 2023-08-09 16:07:59,468 - INFO - Saved model at 7 2023-08-09 16:07:59,468 - INFO - Val loss decrease from 609.9767 to 582.5920, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch7.tar 2023-08-09 16:09:24,398 - INFO - epoch complete! 2023-08-09 16:09:24,398 - INFO - evaluating now! 
2023-08-09 16:09:41,227 - INFO - Epoch [8/100] train_loss: 581.6806, val_loss: 564.7435, lr: 0.000100, 101.76s 2023-08-09 16:09:41,285 - INFO - Saved model at 8 2023-08-09 16:09:41,285 - INFO - Val loss decrease from 582.5920 to 564.7435, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch8.tar 2023-08-09 16:11:03,567 - INFO - epoch complete! 2023-08-09 16:11:03,567 - INFO - evaluating now! 2023-08-09 16:11:19,922 - INFO - Epoch [9/100] train_loss: 553.8725, val_loss: 553.3213, lr: 0.000100, 98.64s 2023-08-09 16:11:19,980 - INFO - Saved model at 9 2023-08-09 16:11:19,980 - INFO - Val loss decrease from 564.7435 to 553.3213, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch9.tar 2023-08-09 16:12:43,582 - INFO - epoch complete! 2023-08-09 16:12:43,583 - INFO - evaluating now! 2023-08-09 16:13:00,733 - INFO - Epoch [10/100] train_loss: 528.9948, val_loss: 545.7058, lr: 0.000100, 100.75s 2023-08-09 16:13:00,790 - INFO - Saved model at 10 2023-08-09 16:13:00,790 - INFO - Val loss decrease from 553.3213 to 545.7058, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch10.tar 2023-08-09 16:14:24,520 - INFO - epoch complete! 2023-08-09 16:14:24,520 - INFO - evaluating now! 2023-08-09 16:14:41,221 - INFO - Epoch [11/100] train_loss: 508.3822, val_loss: 536.9953, lr: 0.000100, 100.43s 2023-08-09 16:14:41,278 - INFO - Saved model at 11 2023-08-09 16:14:41,278 - INFO - Val loss decrease from 545.7058 to 536.9953, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch11.tar 2023-08-09 16:16:03,973 - INFO - epoch complete! 2023-08-09 16:16:03,974 - INFO - evaluating now! 2023-08-09 16:16:20,502 - INFO - Epoch [12/100] train_loss: 491.6843, val_loss: 524.7531, lr: 0.000100, 99.22s 2023-08-09 16:16:20,557 - INFO - Saved model at 12 2023-08-09 16:16:20,557 - INFO - Val loss decrease from 536.9953 to 524.7531, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch12.tar 2023-08-09 16:17:45,695 - INFO - epoch complete! 2023-08-09 16:17:45,695 - INFO - evaluating now! 2023-08-09 16:18:02,270 - INFO - Epoch [13/100] train_loss: 477.9190, val_loss: 522.7174, lr: 0.000100, 101.71s 2023-08-09 16:18:02,329 - INFO - Saved model at 13 2023-08-09 16:18:02,329 - INFO - Val loss decrease from 524.7531 to 522.7174, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch13.tar 2023-08-09 16:19:26,460 - INFO - epoch complete! 2023-08-09 16:19:26,460 - INFO - evaluating now! 2023-08-09 16:19:42,863 - INFO - Epoch [14/100] train_loss: 465.9773, val_loss: 514.4992, lr: 0.000100, 100.53s 2023-08-09 16:19:42,920 - INFO - Saved model at 14 2023-08-09 16:19:42,921 - INFO - Val loss decrease from 522.7174 to 514.4992, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch14.tar 2023-08-09 16:21:04,820 - INFO - epoch complete! 2023-08-09 16:21:04,820 - INFO - evaluating now! 2023-08-09 16:21:21,463 - INFO - Epoch [15/100] train_loss: 455.4796, val_loss: 510.5923, lr: 0.000100, 98.54s 2023-08-09 16:21:21,521 - INFO - Saved model at 15 2023-08-09 16:21:21,522 - INFO - Val loss decrease from 514.4992 to 510.5923, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch15.tar 2023-08-09 16:22:45,701 - INFO - epoch complete! 2023-08-09 16:22:45,701 - INFO - evaluating now! 
2023-08-09 16:23:02,385 - INFO - Epoch [16/100] train_loss: 445.7381, val_loss: 504.4973, lr: 0.000100, 100.86s 2023-08-09 16:23:02,446 - INFO - Saved model at 16 2023-08-09 16:23:02,446 - INFO - Val loss decrease from 510.5923 to 504.4973, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch16.tar 2023-08-09 16:24:24,996 - INFO - epoch complete! 2023-08-09 16:24:24,996 - INFO - evaluating now! 2023-08-09 16:24:41,778 - INFO - Epoch [17/100] train_loss: 437.7733, val_loss: 496.1740, lr: 0.000100, 99.33s 2023-08-09 16:24:41,835 - INFO - Saved model at 17 2023-08-09 16:24:41,835 - INFO - Val loss decrease from 504.4973 to 496.1740, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch17.tar 2023-08-09 16:26:04,883 - INFO - epoch complete! 2023-08-09 16:26:04,883 - INFO - evaluating now! 2023-08-09 16:26:22,021 - INFO - Epoch [18/100] train_loss: 429.0499, val_loss: 496.4909, lr: 0.000100, 100.19s 2023-08-09 16:27:47,263 - INFO - epoch complete! 2023-08-09 16:27:47,263 - INFO - evaluating now! 2023-08-09 16:28:03,875 - INFO - Epoch [19/100] train_loss: 422.4816, val_loss: 486.4580, lr: 0.000100, 101.85s 2023-08-09 16:28:03,933 - INFO - Saved model at 19 2023-08-09 16:28:03,933 - INFO - Val loss decrease from 496.1740 to 486.4580, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch19.tar 2023-08-09 16:29:26,639 - INFO - epoch complete! 2023-08-09 16:29:26,640 - INFO - evaluating now! 2023-08-09 16:29:43,152 - INFO - Epoch [20/100] train_loss: 415.6089, val_loss: 480.7355, lr: 0.000100, 99.22s 2023-08-09 16:29:43,209 - INFO - Saved model at 20 2023-08-09 16:29:43,209 - INFO - Val loss decrease from 486.4580 to 480.7355, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch20.tar 2023-08-09 16:31:06,466 - INFO - epoch complete! 2023-08-09 16:31:06,466 - INFO - evaluating now! 2023-08-09 16:31:23,045 - INFO - Epoch [21/100] train_loss: 408.9518, val_loss: 473.4477, lr: 0.000100, 99.84s 2023-08-09 16:31:23,102 - INFO - Saved model at 21 2023-08-09 16:31:23,103 - INFO - Val loss decrease from 480.7355 to 473.4477, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch21.tar 2023-08-09 16:32:47,617 - INFO - epoch complete! 2023-08-09 16:32:47,617 - INFO - evaluating now! 2023-08-09 16:33:04,503 - INFO - Epoch [22/100] train_loss: 403.1863, val_loss: 471.0713, lr: 0.000100, 101.40s 2023-08-09 16:33:04,565 - INFO - Saved model at 22 2023-08-09 16:33:04,565 - INFO - Val loss decrease from 473.4477 to 471.0713, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch22.tar 2023-08-09 16:34:26,573 - INFO - epoch complete! 2023-08-09 16:34:26,574 - INFO - evaluating now! 2023-08-09 16:34:43,137 - INFO - Epoch [23/100] train_loss: 398.0524, val_loss: 466.4490, lr: 0.000100, 98.57s 2023-08-09 16:34:43,202 - INFO - Saved model at 23 2023-08-09 16:34:43,202 - INFO - Val loss decrease from 471.0713 to 466.4490, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch23.tar 2023-08-09 16:36:06,874 - INFO - epoch complete! 2023-08-09 16:36:06,875 - INFO - evaluating now! 2023-08-09 16:36:23,645 - INFO - Epoch [24/100] train_loss: 393.3629, val_loss: 461.0356, lr: 0.000100, 100.44s 2023-08-09 16:36:23,704 - INFO - Saved model at 24 2023-08-09 16:36:23,704 - INFO - Val loss decrease from 466.4490 to 461.0356, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch24.tar 2023-08-09 16:37:48,414 - INFO - epoch complete! 2023-08-09 16:37:48,414 - INFO - evaluating now! 
2023-08-09 16:38:04,839 - INFO - Epoch [25/100] train_loss: 388.6540, val_loss: 458.8537, lr: 0.000100, 101.14s 2023-08-09 16:38:04,897 - INFO - Saved model at 25 2023-08-09 16:38:04,898 - INFO - Val loss decrease from 461.0356 to 458.8537, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch25.tar 2023-08-09 16:39:27,455 - INFO - epoch complete! 2023-08-09 16:39:27,455 - INFO - evaluating now! 2023-08-09 16:39:43,986 - INFO - Epoch [26/100] train_loss: 384.2978, val_loss: 457.4638, lr: 0.000100, 99.09s 2023-08-09 16:39:44,042 - INFO - Saved model at 26 2023-08-09 16:39:44,042 - INFO - Val loss decrease from 458.8537 to 457.4638, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch26.tar 2023-08-09 16:41:09,163 - INFO - epoch complete! 2023-08-09 16:41:09,164 - INFO - evaluating now! 2023-08-09 16:41:25,946 - INFO - Epoch [27/100] train_loss: 380.4705, val_loss: 452.5569, lr: 0.000100, 101.90s 2023-08-09 16:41:26,004 - INFO - Saved model at 27 2023-08-09 16:41:26,005 - INFO - Val loss decrease from 457.4638 to 452.5569, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch27.tar 2023-08-09 16:42:49,370 - INFO - epoch complete! 2023-08-09 16:42:49,371 - INFO - evaluating now! 2023-08-09 16:43:05,992 - INFO - Epoch [28/100] train_loss: 376.6974, val_loss: 447.9182, lr: 0.000100, 99.99s 2023-08-09 16:43:06,051 - INFO - Saved model at 28 2023-08-09 16:43:06,051 - INFO - Val loss decrease from 452.5569 to 447.9182, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch28.tar 2023-08-09 16:44:28,209 - INFO - epoch complete! 2023-08-09 16:44:28,210 - INFO - evaluating now! 2023-08-09 16:44:45,033 - INFO - Epoch [29/100] train_loss: 373.3920, val_loss: 443.4390, lr: 0.000100, 98.98s 2023-08-09 16:44:45,092 - INFO - Saved model at 29 2023-08-09 16:44:45,092 - INFO - Val loss decrease from 447.9182 to 443.4390, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch29.tar 2023-08-09 16:46:09,252 - INFO - epoch complete! 2023-08-09 16:46:09,252 - INFO - evaluating now! 2023-08-09 16:46:26,459 - INFO - Epoch [30/100] train_loss: 370.0457, val_loss: 442.4330, lr: 0.000100, 101.37s 2023-08-09 16:46:26,523 - INFO - Saved model at 30 2023-08-09 16:46:26,523 - INFO - Val loss decrease from 443.4390 to 442.4330, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch30.tar 2023-08-09 16:47:49,394 - INFO - epoch complete! 2023-08-09 16:47:49,394 - INFO - evaluating now! 2023-08-09 16:48:05,906 - INFO - Epoch [31/100] train_loss: 366.8464, val_loss: 436.8745, lr: 0.000100, 99.38s 2023-08-09 16:48:05,964 - INFO - Saved model at 31 2023-08-09 16:48:05,964 - INFO - Val loss decrease from 442.4330 to 436.8745, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch31.tar 2023-08-09 16:49:29,383 - INFO - epoch complete! 2023-08-09 16:49:29,383 - INFO - evaluating now! 2023-08-09 16:49:46,276 - INFO - Epoch [32/100] train_loss: 364.0575, val_loss: 435.8164, lr: 0.000100, 100.31s 2023-08-09 16:49:46,334 - INFO - Saved model at 32 2023-08-09 16:49:46,335 - INFO - Val loss decrease from 436.8745 to 435.8164, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch32.tar 2023-08-09 16:51:11,435 - INFO - epoch complete! 2023-08-09 16:51:11,435 - INFO - evaluating now! 
2023-08-09 16:51:28,070 - INFO - Epoch [33/100] train_loss: 361.2620, val_loss: 432.2225, lr: 0.000100, 101.74s 2023-08-09 16:51:28,128 - INFO - Saved model at 33 2023-08-09 16:51:28,128 - INFO - Val loss decrease from 435.8164 to 432.2225, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch33.tar 2023-08-09 16:52:49,880 - INFO - epoch complete! 2023-08-09 16:52:49,880 - INFO - evaluating now! 2023-08-09 16:53:06,208 - INFO - Epoch [34/100] train_loss: 359.0331, val_loss: 429.9115, lr: 0.000100, 98.08s 2023-08-09 16:53:06,266 - INFO - Saved model at 34 2023-08-09 16:53:06,267 - INFO - Val loss decrease from 432.2225 to 429.9115, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch34.tar 2023-08-09 16:54:29,374 - INFO - epoch complete! 2023-08-09 16:54:29,374 - INFO - evaluating now! 2023-08-09 16:54:46,076 - INFO - Epoch [35/100] train_loss: 357.4683, val_loss: 429.0914, lr: 0.000100, 99.81s 2023-08-09 16:54:46,134 - INFO - Saved model at 35 2023-08-09 16:54:46,134 - INFO - Val loss decrease from 429.9115 to 429.0914, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch35.tar 2023-08-09 16:56:09,274 - INFO - epoch complete! 2023-08-09 16:56:09,274 - INFO - evaluating now! 2023-08-09 16:56:25,861 - INFO - Epoch [36/100] train_loss: 354.7035, val_loss: 426.1112, lr: 0.000100, 99.73s 2023-08-09 16:56:25,920 - INFO - Saved model at 36 2023-08-09 16:56:25,920 - INFO - Val loss decrease from 429.0914 to 426.1112, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch36.tar 2023-08-09 16:57:48,279 - INFO - epoch complete! 2023-08-09 16:57:48,280 - INFO - evaluating now! 2023-08-09 16:58:05,159 - INFO - Epoch [37/100] train_loss: 352.5587, val_loss: 423.6635, lr: 0.000100, 99.24s 2023-08-09 16:58:05,219 - INFO - Saved model at 37 2023-08-09 16:58:05,219 - INFO - Val loss decrease from 426.1112 to 423.6635, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch37.tar

OvOYu commented 1 year ago

This is the log from my ASTGCN training run. Isn't this loss a bit too high? Could you tell me where the problem might be? Thanks.

aptx1231 commented 1 year ago

Testing runs automatically after training finishes. The .tar files are just the model checkpoints saved at each epoch; you don't need to do anything with them.

aptx1231 commented 1 year ago

You can also run the test stage on its own by adding --train false --exp_id (the exp_id generated during your training run) on the command line.
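
For example, a minimal sketch using the quick-start run_model entry point; the exp_id value 8834 is just the one printed in the log below, and forwarding exp_id through other_args is an assumption about how the pipeline passes extra options:

```python
# Minimal sketch: rerun only the evaluation stage for an already-trained experiment.
# Assumes LibCity's quick-start entry point libcity.pipeline.run_model and that
# extra options such as exp_id can be forwarded via other_args.
from libcity.pipeline import run_model

run_model(
    task="traffic_state_pred",
    model_name="ASTGCN",
    dataset_name="PEMSD4",
    train=False,                    # skip training, load the cached model instead
    other_args={"exp_id": 8834},    # exp_id printed at the start of the training log
)
```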

aptx1231 commented 1 year ago

Please post the log from the data-processing stage. Most likely the features were not filtered: PEMSD4 contains both speed and flow, so you need to configure data_col and output_dim in the dataset's raw_data/pemsd4/config.json. Set data_col to whichever feature you want to use (speed or flow) and set output_dim to 1.

OvOYu commented 1 year ago

2023-08-09 15:47:53,225 - INFO - Log directory: ./libcity/log 2023-08-09 15:47:53,226 - INFO - Begin pipeline, task=traffic_state_pred, model_name=ASTGCN, dataset_name=PEMSD4, exp_id=8834 2023-08-09 15:47:53,226 - INFO - {'task': 'traffic_state_pred', 'model': 'ASTGCN', 'dataset': 'PEMSD4', 'saved_model': True, 'train': True, 'seed': 0, 'dataset_class': 'ASTGCNDataset', 'executor': 'TrafficStateExecutor', 'evaluator': 'TrafficStateEvaluator', 'nb_block': 2, 'K': 3, 'nb_chev_filter': 64, 'nb_time_filter': 64, 'scaler': 'standard', 'load_external': False, 'normal_external': False, 'ext_scaler': 'none', 'add_time_in_day': False, 'add_day_in_week': False, 'train_rate': 0.6, 'eval_rate': 0.2, 'max_epoch': 100, 'learner': 'adam', 'learning_rate': 0.0001, 'lr_decay': False, 'clip_grad_norm': False, 'use_early_stop': False, 'batch_size': 64, 'cache_dataset': True, 'num_workers': 0, 'pad_with_last_sample': True, 'input_window': 12, 'output_window': 12, 'len_closeness': 2, 'len_period': 1, 'len_trend': 2, 'interval_period': 1, 'interval_trend': 7, 'gpu': True, 'gpu_id': 0, 'train_loss': 'none', 'epoch': 0, 'weight_decay': 0, 'lr_epsilon': 1e-08, 'lr_beta1': 0.9, 'lr_beta2': 0.999, 'lr_alpha': 0.99, 'lr_momentum': 0, 'lr_scheduler': 'multisteplr', 'lr_decay_ratio': 0.1, 'steps': [5, 20, 40, 70], 'step_size': 10, 'lr_T_max': 30, 'lr_eta_min': 0, 'lr_patience': 10, 'lr_threshold': 0.0001, 'max_grad_norm': 1.0, 'patience': 50, 'log_level': 'INFO', 'log_every': 1, 'load_best_epoch': True, 'hyper_tune': False, 'metrics': ['MAE', 'MAPE', 'MSE', 'RMSE', 'masked_MAE', 'masked_MAPE', 'masked_MSE', 'masked_RMSE', 'R2', 'EVAR'], 'evaluator_mode': 'single', 'save_mode': ['csv'], 'geo': {'including_types': ['Point'], 'Point': {}}, 'rel': {'including_types': ['geo'], 'geo': {'cost': 'num'}}, 'dyna': {'including_types': ['state'], 'state': {'entity_id': 'geo_id', 'traffic_flow': 'num', 'traffic_occupancy': 'num', 'traffic_speed': 'num'}}, 'data_col': ['traffic_flow', 'traffic_occupancy', 'traffic_speed'], 'weight_col': 'cost', 'data_files': ['PEMSD4'], 'geo_file': 'PEMSD4', 'rel_file': 'PEMSD4', 'output_dim': 3, 'time_intervals': 300, 'init_weight_inf_or_zero': 'zero', 'set_weight_link_or_dist': 'link', 'calculate_weight_adj': False, 'weight_adj_epsilon': 0.1, 'device': device(type='cuda', index=0), 'exp_id': 8834} 2023-08-09 15:47:53,858 - INFO - Loaded file PEMSD4.geo, num_nodes=307 2023-08-09 15:47:53,861 - INFO - set_weight_link_or_dist: link 2023-08-09 15:47:53,861 - INFO - init_weight_inf_or_zero: zero 2023-08-09 15:47:53,863 - INFO - Loaded file PEMSD4.rel, shape=(307, 307) 2023-08-09 15:47:53,864 - INFO - Loading file PEMSD4.dyna 2023-08-09 15:47:57,590 - INFO - Loaded file PEMSD4.dyna, shape=(16992, 307, 3) 2023-08-09 15:48:02,600 - INFO - closeness: (12949, 24, 307, 3) 2023-08-09 15:48:07,582 - INFO - period: (12949, 12, 307, 3) 2023-08-09 15:48:11,626 - INFO - trend: (12949, 24, 307, 3) 2023-08-09 15:48:16,655 - INFO - Dataset created 2023-08-09 15:48:16,655 - INFO - x shape: (12949, 60, 307, 3), y shape: (12949, 12, 307, 3) 2023-08-09 15:48:16,882 - INFO - train x: (7769, 60, 307, 3), y: (7769, 12, 307, 3) 2023-08-09 15:48:16,882 - INFO - eval x: (2590, 60, 307, 3), y: (2590, 12, 307, 3) 2023-08-09 15:48:16,882 - INFO - test x: (2590, 60, 307, 3), y: (2590, 12, 307, 3) 2023-08-09 15:54:23,905 - INFO - Saved at ./libcity/cache/dataset_cache/point_based_PEMSD4_2_1_2_1_7_12_0.6_0.2_standard_64_False_False_True.npz 2023-08-09 15:54:26,986 - INFO - StandardScaler mean: 91.72662831592437, std: 
127.5481471486698 2023-08-09 15:54:26,987 - INFO - NoneScaler 2023-08-09 15:54:35,246 - INFO - ASTGCN( (hours_ASTGCN_submodule): ASTGCNSubmodule( (BlockList): ModuleList( (0): ASTGCNBlock( (TAt): TemporalAttentionLayer() (SAt): SpatialAttentionLayer() (cheb_conv_SAt): ChebConvWithSAt( (Theta): ParameterList( (0): Parameter containing: [torch.cuda.FloatTensor of size 3x64 (GPU 0)] (1): Parameter containing: [torch.cuda.FloatTensor of size 3x64 (GPU 0)] (2): Parameter containing: [torch.cuda.FloatTensor of size 3x64 (GPU 0)] ) ) (time_conv): Conv2d(64, 64, kernel_size=(1, 3), stride=(1, 2), padding=(0, 1)) (residual_conv): Conv2d(3, 64, kernel_size=(1, 1), stride=(1, 2)) (ln): LayerNorm((64,), eps=1e-05, elementwise_affine=True) ) (1): ASTGCNBlock( (TAt): TemporalAttentionLayer() (SAt): SpatialAttentionLayer() (cheb_conv_SAt): ChebConvWithSAt( (Theta): ParameterList( (0): Parameter containing: [torch.cuda.FloatTensor of size 64x64 (GPU 0)] (1): Parameter containing: [torch.cuda.FloatTensor of size 64x64 (GPU 0)] (2): Parameter containing: [torch.cuda.FloatTensor of size 64x64 (GPU 0)] ) ) (time_conv): Conv2d(64, 64, kernel_size=(1, 3), stride=(1, 1), padding=(0, 1)) (residual_conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1)) (ln): LayerNorm((64,), eps=1e-05, elementwise_affine=True) ) ) (final_conv): Conv2d(12, 12, kernel_size=(1, 62), stride=(1, 1)) (fusionlayer): FusionLayer() ) (days_ASTGCN_submodule): ASTGCNSubmodule( (BlockList): ModuleList( (0): ASTGCNBlock( (TAt): TemporalAttentionLayer() (SAt): SpatialAttentionLayer() (cheb_conv_SAt): ChebConvWithSAt( (Theta): ParameterList( (0): Parameter containing: [torch.cuda.FloatTensor of size 3x64 (GPU 0)] (1): Parameter containing: [torch.cuda.FloatTensor of size 3x64 (GPU 0)] (2): Parameter containing: [torch.cuda.FloatTensor of size 3x64 (GPU 0)] ) ) (time_conv): Conv2d(64, 64, kernel_size=(1, 3), stride=(1, 1), padding=(0, 1)) (residual_conv): Conv2d(3, 64, kernel_size=(1, 1), stride=(1, 1)) (ln): LayerNorm((64,), eps=1e-05, elementwise_affine=True) ) (1): ASTGCNBlock( (TAt): TemporalAttentionLayer() (SAt): SpatialAttentionLayer() (cheb_conv_SAt): ChebConvWithSAt( (Theta): ParameterList( (0): Parameter containing: [torch.cuda.FloatTensor of size 64x64 (GPU 0)] (1): Parameter containing: [torch.cuda.FloatTensor of size 64x64 (GPU 0)] (2): Parameter containing: [torch.cuda.FloatTensor of size 64x64 (GPU 0)] ) ) (time_conv): Conv2d(64, 64, kernel_size=(1, 3), stride=(1, 1), padding=(0, 1)) (residual_conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1)) (ln): LayerNorm((64,), eps=1e-05, elementwise_affine=True) ) ) (final_conv): Conv2d(12, 12, kernel_size=(1, 62), stride=(1, 1)) (fusionlayer): FusionLayer() ) (weeks_ASTGCN_submodule): ASTGCNSubmodule( (BlockList): ModuleList( (0): ASTGCNBlock( (TAt): TemporalAttentionLayer() (SAt): SpatialAttentionLayer() (cheb_conv_SAt): ChebConvWithSAt( (Theta): ParameterList( (0): Parameter containing: [torch.cuda.FloatTensor of size 3x64 (GPU 0)] (1): Parameter containing: [torch.cuda.FloatTensor of size 3x64 (GPU 0)] (2): Parameter containing: [torch.cuda.FloatTensor of size 3x64 (GPU 0)] ) ) (time_conv): Conv2d(64, 64, kernel_size=(1, 3), stride=(1, 2), padding=(0, 1)) (residual_conv): Conv2d(3, 64, kernel_size=(1, 1), stride=(1, 2)) (ln): LayerNorm((64,), eps=1e-05, elementwise_affine=True) ) (1): ASTGCNBlock( (TAt): TemporalAttentionLayer() (SAt): SpatialAttentionLayer() (cheb_conv_SAt): ChebConvWithSAt( (Theta): ParameterList( (0): Parameter containing: [torch.cuda.FloatTensor of size 64x64 
(GPU 0)] (1): Parameter containing: [torch.cuda.FloatTensor of size 64x64 (GPU 0)] (2): Parameter containing: [torch.cuda.FloatTensor of size 64x64 (GPU 0)] ) ) (time_conv): Conv2d(64, 64, kernel_size=(1, 3), stride=(1, 1), padding=(0, 1)) (residual_conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1)) (ln): LayerNorm((64,), eps=1e-05, elementwise_affine=True) ) ) (final_conv): Conv2d(12, 12, kernel_size=(1, 62), stride=(1, 1)) (fusionlayer): FusionLayer() ) ) 2023-08-09 15:54:35,252 - INFO - hours_ASTGCN_submodule.BlockList.0.TAt.U1 torch.Size([307]) cuda:0 True 2023-08-09 15:54:35,252 - INFO - hours_ASTGCN_submodule.BlockList.0.TAt.U2 torch.Size([3, 307]) cuda:0 True 2023-08-09 15:54:35,252 - INFO - hours_ASTGCN_submodule.BlockList.0.TAt.U3 torch.Size([3]) cuda:0 True 2023-08-09 15:54:35,252 - INFO - hours_ASTGCN_submodule.BlockList.0.TAt.be torch.Size([1, 24, 24]) cuda:0 True 2023-08-09 15:54:35,252 - INFO - hours_ASTGCN_submodule.BlockList.0.TAt.Ve torch.Size([24, 24]) cuda:0 True 2023-08-09 15:54:35,252 - INFO - hours_ASTGCN_submodule.BlockList.0.SAt.W1 torch.Size([24]) cuda:0 True 2023-08-09 15:54:35,253 - INFO - hours_ASTGCN_submodule.BlockList.0.SAt.W2 torch.Size([3, 24]) cuda:0 True 2023-08-09 15:54:35,253 - INFO - hours_ASTGCN_submodule.BlockList.0.SAt.W3 torch.Size([3]) cuda:0 True 2023-08-09 15:54:35,253 - INFO - hours_ASTGCN_submodule.BlockList.0.SAt.bs torch.Size([1, 307, 307]) cuda:0 True 2023-08-09 15:54:35,253 - INFO - hours_ASTGCN_submodule.BlockList.0.SAt.Vs torch.Size([307, 307]) cuda:0 True 2023-08-09 15:54:35,253 - INFO - hours_ASTGCN_submodule.BlockList.0.cheb_conv_SAt.Theta.0 torch.Size([3, 64]) cuda:0 True 2023-08-09 15:54:35,253 - INFO - hours_ASTGCN_submodule.BlockList.0.cheb_conv_SAt.Theta.1 torch.Size([3, 64]) cuda:0 True 2023-08-09 15:54:35,253 - INFO - hours_ASTGCN_submodule.BlockList.0.cheb_conv_SAt.Theta.2 torch.Size([3, 64]) cuda:0 True 2023-08-09 15:54:35,253 - INFO - hours_ASTGCN_submodule.BlockList.0.time_conv.weight torch.Size([64, 64, 1, 3]) cuda:0 True 2023-08-09 15:54:35,253 - INFO - hours_ASTGCN_submodule.BlockList.0.time_conv.bias torch.Size([64]) cuda:0 True 2023-08-09 15:54:35,253 - INFO - hours_ASTGCN_submodule.BlockList.0.residual_conv.weight torch.Size([64, 3, 1, 1]) cuda:0 True 2023-08-09 15:54:35,254 - INFO - hours_ASTGCN_submodule.BlockList.0.residual_conv.bias torch.Size([64]) cuda:0 True 2023-08-09 15:54:35,254 - INFO - hours_ASTGCN_submodule.BlockList.0.ln.weight torch.Size([64]) cuda:0 True 2023-08-09 15:54:35,254 - INFO - hours_ASTGCN_submodule.BlockList.0.ln.bias torch.Size([64]) cuda:0 True 2023-08-09 15:54:35,254 - INFO - hours_ASTGCN_submodule.BlockList.1.TAt.U1 torch.Size([307]) cuda:0 True 2023-08-09 15:54:35,254 - INFO - hours_ASTGCN_submodule.BlockList.1.TAt.U2 torch.Size([64, 307]) cuda:0 True 2023-08-09 15:54:35,254 - INFO - hours_ASTGCN_submodule.BlockList.1.TAt.U3 torch.Size([64]) cuda:0 True 2023-08-09 15:54:35,254 - INFO - hours_ASTGCN_submodule.BlockList.1.TAt.be torch.Size([1, 12, 12]) cuda:0 True 2023-08-09 15:54:35,254 - INFO - hours_ASTGCN_submodule.BlockList.1.TAt.Ve torch.Size([12, 12]) cuda:0 True 2023-08-09 15:54:35,254 - INFO - hours_ASTGCN_submodule.BlockList.1.SAt.W1 torch.Size([12]) cuda:0 True 2023-08-09 15:54:35,254 - INFO - hours_ASTGCN_submodule.BlockList.1.SAt.W2 torch.Size([64, 12]) cuda:0 True 2023-08-09 15:54:35,254 - INFO - hours_ASTGCN_submodule.BlockList.1.SAt.W3 torch.Size([64]) cuda:0 True 2023-08-09 15:54:35,255 - INFO - hours_ASTGCN_submodule.BlockList.1.SAt.bs torch.Size([1, 307, 307]) 
cuda:0 True 2023-08-09 15:54:35,255 - INFO - hours_ASTGCN_submodule.BlockList.1.SAt.Vs torch.Size([307, 307]) cuda:0 True 2023-08-09 15:54:35,255 - INFO - hours_ASTGCN_submodule.BlockList.1.cheb_conv_SAt.Theta.0 torch.Size([64, 64]) cuda:0 True 2023-08-09 15:54:35,255 - INFO - hours_ASTGCN_submodule.BlockList.1.cheb_conv_SAt.Theta.1 torch.Size([64, 64]) cuda:0 True 2023-08-09 15:54:35,255 - INFO - hours_ASTGCN_submodule.BlockList.1.cheb_conv_SAt.Theta.2 torch.Size([64, 64]) cuda:0 True 2023-08-09 15:54:35,255 - INFO - hours_ASTGCN_submodule.BlockList.1.time_conv.weight torch.Size([64, 64, 1, 3]) cuda:0 True 2023-08-09 15:54:35,255 - INFO - hours_ASTGCN_submodule.BlockList.1.time_conv.bias torch.Size([64]) cuda:0 True 2023-08-09 15:54:35,255 - INFO - hours_ASTGCN_submodule.BlockList.1.residual_conv.weight torch.Size([64, 64, 1, 1]) cuda:0 True 2023-08-09 15:54:35,255 - INFO - hours_ASTGCN_submodule.BlockList.1.residual_conv.bias torch.Size([64]) cuda:0 True 2023-08-09 15:54:35,256 - INFO - hours_ASTGCN_submodule.BlockList.1.ln.weight torch.Size([64]) cuda:0 True 2023-08-09 15:54:35,267 - INFO - hours_ASTGCN_submodule.BlockList.1.ln.bias torch.Size([64]) cuda:0 True 2023-08-09 15:54:35,267 - INFO - hours_ASTGCN_submodule.final_conv.weight torch.Size([12, 12, 1, 62]) cuda:0 True 2023-08-09 15:54:35,268 - INFO - hours_ASTGCN_submodule.final_conv.bias torch.Size([12]) cuda:0 True 2023-08-09 15:54:35,268 - INFO - hours_ASTGCN_submodule.fusionlayer.weights torch.Size([1, 12, 307, 3]) cuda:0 True 2023-08-09 15:54:35,268 - INFO - days_ASTGCN_submodule.BlockList.0.TAt.U1 torch.Size([307]) cuda:0 True 2023-08-09 15:54:35,268 - INFO - days_ASTGCN_submodule.BlockList.0.TAt.U2 torch.Size([3, 307]) cuda:0 True 2023-08-09 15:54:35,268 - INFO - days_ASTGCN_submodule.BlockList.0.TAt.U3 torch.Size([3]) cuda:0 True 2023-08-09 15:54:35,268 - INFO - days_ASTGCN_submodule.BlockList.0.TAt.be torch.Size([1, 12, 12]) cuda:0 True 2023-08-09 15:54:35,268 - INFO - days_ASTGCN_submodule.BlockList.0.TAt.Ve torch.Size([12, 12]) cuda:0 True 2023-08-09 15:54:35,268 - INFO - days_ASTGCN_submodule.BlockList.0.SAt.W1 torch.Size([12]) cuda:0 True 2023-08-09 15:54:35,268 - INFO - days_ASTGCN_submodule.BlockList.0.SAt.W2 torch.Size([3, 12]) cuda:0 True 2023-08-09 15:54:35,268 - INFO - days_ASTGCN_submodule.BlockList.0.SAt.W3 torch.Size([3]) cuda:0 True 2023-08-09 15:54:35,268 - INFO - days_ASTGCN_submodule.BlockList.0.SAt.bs torch.Size([1, 307, 307]) cuda:0 True 2023-08-09 15:54:35,269 - INFO - days_ASTGCN_submodule.BlockList.0.SAt.Vs torch.Size([307, 307]) cuda:0 True 2023-08-09 15:54:35,269 - INFO - days_ASTGCN_submodule.BlockList.0.cheb_conv_SAt.Theta.0 torch.Size([3, 64]) cuda:0 True 2023-08-09 15:54:35,269 - INFO - days_ASTGCN_submodule.BlockList.0.cheb_conv_SAt.Theta.1 torch.Size([3, 64]) cuda:0 True 2023-08-09 15:54:35,269 - INFO - days_ASTGCN_submodule.BlockList.0.cheb_conv_SAt.Theta.2 torch.Size([3, 64]) cuda:0 True 2023-08-09 15:54:35,269 - INFO - days_ASTGCN_submodule.BlockList.0.time_conv.weight torch.Size([64, 64, 1, 3]) cuda:0 True 2023-08-09 15:54:35,269 - INFO - days_ASTGCN_submodule.BlockList.0.time_conv.bias torch.Size([64]) cuda:0 True 2023-08-09 15:54:35,269 - INFO - days_ASTGCN_submodule.BlockList.0.residual_conv.weight torch.Size([64, 3, 1, 1]) cuda:0 True 2023-08-09 15:54:35,269 - INFO - days_ASTGCN_submodule.BlockList.0.residual_conv.bias torch.Size([64]) cuda:0 True 2023-08-09 15:54:35,269 - INFO - days_ASTGCN_submodule.BlockList.0.ln.weight torch.Size([64]) cuda:0 True 2023-08-09 15:54:35,269 - INFO - 
days_ASTGCN_submodule.BlockList.0.ln.bias torch.Size([64]) cuda:0 True 2023-08-09 15:54:35,269 - INFO - days_ASTGCN_submodule.BlockList.1.TAt.U1 torch.Size([307]) cuda:0 True 2023-08-09 15:54:35,269 - INFO - days_ASTGCN_submodule.BlockList.1.TAt.U2 torch.Size([64, 307]) cuda:0 True 2023-08-09 15:54:35,270 - INFO - days_ASTGCN_submodule.BlockList.1.TAt.U3 torch.Size([64]) cuda:0 True 2023-08-09 15:54:35,270 - INFO - days_ASTGCN_submodule.BlockList.1.TAt.be torch.Size([1, 12, 12]) cuda:0 True 2023-08-09 15:54:35,270 - INFO - days_ASTGCN_submodule.BlockList.1.TAt.Ve torch.Size([12, 12]) cuda:0 True 2023-08-09 15:54:35,270 - INFO - days_ASTGCN_submodule.BlockList.1.SAt.W1 torch.Size([12]) cuda:0 True 2023-08-09 15:54:35,270 - INFO - days_ASTGCN_submodule.BlockList.1.SAt.W2 torch.Size([64, 12]) cuda:0 True 2023-08-09 15:54:35,270 - INFO - days_ASTGCN_submodule.BlockList.1.SAt.W3 torch.Size([64]) cuda:0 True 2023-08-09 15:54:35,270 - INFO - days_ASTGCN_submodule.BlockList.1.SAt.bs torch.Size([1, 307, 307]) cuda:0 True 2023-08-09 15:54:35,270 - INFO - days_ASTGCN_submodule.BlockList.1.SAt.Vs torch.Size([307, 307]) cuda:0 True 2023-08-09 15:54:35,270 - INFO - days_ASTGCN_submodule.BlockList.1.cheb_conv_SAt.Theta.0 torch.Size([64, 64]) cuda:0 True 2023-08-09 15:54:35,270 - INFO - days_ASTGCN_submodule.BlockList.1.cheb_conv_SAt.Theta.1 torch.Size([64, 64]) cuda:0 True 2023-08-09 15:54:35,270 - INFO - days_ASTGCN_submodule.BlockList.1.cheb_conv_SAt.Theta.2 torch.Size([64, 64]) cuda:0 True 2023-08-09 15:54:35,271 - INFO - days_ASTGCN_submodule.BlockList.1.time_conv.weight torch.Size([64, 64, 1, 3]) cuda:0 True 2023-08-09 15:54:35,283 - INFO - days_ASTGCN_submodule.BlockList.1.time_conv.bias torch.Size([64]) cuda:0 True 2023-08-09 15:54:35,283 - INFO - days_ASTGCN_submodule.BlockList.1.residual_conv.weight torch.Size([64, 64, 1, 1]) cuda:0 True 2023-08-09 15:54:35,283 - INFO - days_ASTGCN_submodule.BlockList.1.residual_conv.bias torch.Size([64]) cuda:0 True 2023-08-09 15:54:35,283 - INFO - days_ASTGCN_submodule.BlockList.1.ln.weight torch.Size([64]) cuda:0 True 2023-08-09 15:54:35,283 - INFO - days_ASTGCN_submodule.BlockList.1.ln.bias torch.Size([64]) cuda:0 True 2023-08-09 15:54:35,283 - INFO - days_ASTGCN_submodule.final_conv.weight torch.Size([12, 12, 1, 62]) cuda:0 True 2023-08-09 15:54:35,283 - INFO - days_ASTGCN_submodule.final_conv.bias torch.Size([12]) cuda:0 True 2023-08-09 15:54:35,283 - INFO - days_ASTGCN_submodule.fusionlayer.weights torch.Size([1, 12, 307, 3]) cuda:0 True 2023-08-09 15:54:35,284 - INFO - weeks_ASTGCN_submodule.BlockList.0.TAt.U1 torch.Size([307]) cuda:0 True 2023-08-09 15:54:35,284 - INFO - weeks_ASTGCN_submodule.BlockList.0.TAt.U2 torch.Size([3, 307]) cuda:0 True 2023-08-09 15:54:35,284 - INFO - weeks_ASTGCN_submodule.BlockList.0.TAt.U3 torch.Size([3]) cuda:0 True 2023-08-09 15:54:35,284 - INFO - weeks_ASTGCN_submodule.BlockList.0.TAt.be torch.Size([1, 24, 24]) cuda:0 True 2023-08-09 15:54:35,284 - INFO - weeks_ASTGCN_submodule.BlockList.0.TAt.Ve torch.Size([24, 24]) cuda:0 True 2023-08-09 15:54:35,284 - INFO - weeks_ASTGCN_submodule.BlockList.0.SAt.W1 torch.Size([24]) cuda:0 True 2023-08-09 15:54:35,284 - INFO - weeks_ASTGCN_submodule.BlockList.0.SAt.W2 torch.Size([3, 24]) cuda:0 True 2023-08-09 15:54:35,284 - INFO - weeks_ASTGCN_submodule.BlockList.0.SAt.W3 torch.Size([3]) cuda:0 True 2023-08-09 15:54:35,284 - INFO - weeks_ASTGCN_submodule.BlockList.0.SAt.bs torch.Size([1, 307, 307]) cuda:0 True 2023-08-09 15:54:35,284 - INFO - weeks_ASTGCN_submodule.BlockList.0.SAt.Vs 
torch.Size([307, 307]) cuda:0 True 2023-08-09 15:54:35,284 - INFO - weeks_ASTGCN_submodule.BlockList.0.cheb_conv_SAt.Theta.0 torch.Size([3, 64]) cuda:0 True 2023-08-09 15:54:35,284 - INFO - weeks_ASTGCN_submodule.BlockList.0.cheb_conv_SAt.Theta.1 torch.Size([3, 64]) cuda:0 True 2023-08-09 15:54:35,285 - INFO - weeks_ASTGCN_submodule.BlockList.0.cheb_conv_SAt.Theta.2 torch.Size([3, 64]) cuda:0 True 2023-08-09 15:54:35,285 - INFO - weeks_ASTGCN_submodule.BlockList.0.time_conv.weight torch.Size([64, 64, 1, 3]) cuda:0 True 2023-08-09 15:54:35,285 - INFO - weeks_ASTGCN_submodule.BlockList.0.time_conv.bias torch.Size([64]) cuda:0 True 2023-08-09 15:54:35,285 - INFO - weeks_ASTGCN_submodule.BlockList.0.residual_conv.weight torch.Size([64, 3, 1, 1]) cuda:0 True 2023-08-09 15:54:35,285 - INFO - weeks_ASTGCN_submodule.BlockList.0.residual_conv.bias torch.Size([64]) cuda:0 True 2023-08-09 15:54:35,285 - INFO - weeks_ASTGCN_submodule.BlockList.0.ln.weight torch.Size([64]) cuda:0 True 2023-08-09 15:54:35,285 - INFO - weeks_ASTGCN_submodule.BlockList.0.ln.bias torch.Size([64]) cuda:0 True 2023-08-09 15:54:35,285 - INFO - weeks_ASTGCN_submodule.BlockList.1.TAt.U1 torch.Size([307]) cuda:0 True 2023-08-09 15:54:35,285 - INFO - weeks_ASTGCN_submodule.BlockList.1.TAt.U2 torch.Size([64, 307]) cuda:0 True 2023-08-09 15:54:35,285 - INFO - weeks_ASTGCN_submodule.BlockList.1.TAt.U3 torch.Size([64]) cuda:0 True 2023-08-09 15:54:35,285 - INFO - weeks_ASTGCN_submodule.BlockList.1.TAt.be torch.Size([1, 12, 12]) cuda:0 True 2023-08-09 15:54:35,286 - INFO - weeks_ASTGCN_submodule.BlockList.1.TAt.Ve torch.Size([12, 12]) cuda:0 True 2023-08-09 15:54:35,286 - INFO - weeks_ASTGCN_submodule.BlockList.1.SAt.W1 torch.Size([12]) cuda:0 True 2023-08-09 15:54:35,286 - INFO - weeks_ASTGCN_submodule.BlockList.1.SAt.W2 torch.Size([64, 12]) cuda:0 True 2023-08-09 15:54:35,286 - INFO - weeks_ASTGCN_submodule.BlockList.1.SAt.W3 torch.Size([64]) cuda:0 True 2023-08-09 15:54:35,286 - INFO - weeks_ASTGCN_submodule.BlockList.1.SAt.bs torch.Size([1, 307, 307]) cuda:0 True 2023-08-09 15:54:35,286 - INFO - weeks_ASTGCN_submodule.BlockList.1.SAt.Vs torch.Size([307, 307]) cuda:0 True 2023-08-09 15:54:35,298 - INFO - weeks_ASTGCN_submodule.BlockList.1.cheb_conv_SAt.Theta.0 torch.Size([64, 64]) cuda:0 True 2023-08-09 15:54:35,298 - INFO - weeks_ASTGCN_submodule.BlockList.1.cheb_conv_SAt.Theta.1 torch.Size([64, 64]) cuda:0 True 2023-08-09 15:54:35,299 - INFO - weeks_ASTGCN_submodule.BlockList.1.cheb_conv_SAt.Theta.2 torch.Size([64, 64]) cuda:0 True 2023-08-09 15:54:35,299 - INFO - weeks_ASTGCN_submodule.BlockList.1.time_conv.weight torch.Size([64, 64, 1, 3]) cuda:0 True 2023-08-09 15:54:35,299 - INFO - weeks_ASTGCN_submodule.BlockList.1.time_conv.bias torch.Size([64]) cuda:0 True 2023-08-09 15:54:35,299 - INFO - weeks_ASTGCN_submodule.BlockList.1.residual_conv.weight torch.Size([64, 64, 1, 1]) cuda:0 True 2023-08-09 15:54:35,299 - INFO - weeks_ASTGCN_submodule.BlockList.1.residual_conv.bias torch.Size([64]) cuda:0 True 2023-08-09 15:54:35,299 - INFO - weeks_ASTGCN_submodule.BlockList.1.ln.weight torch.Size([64]) cuda:0 True 2023-08-09 15:54:35,299 - INFO - weeks_ASTGCN_submodule.BlockList.1.ln.bias torch.Size([64]) cuda:0 True 2023-08-09 15:54:35,299 - INFO - weeks_ASTGCN_submodule.final_conv.weight torch.Size([12, 12, 1, 62]) cuda:0 True 2023-08-09 15:54:35,299 - INFO - weeks_ASTGCN_submodule.final_conv.bias torch.Size([12]) cuda:0 True 2023-08-09 15:54:35,299 - INFO - weeks_ASTGCN_submodule.fusionlayer.weights torch.Size([1, 12, 307, 3]) cuda:0 
True 2023-08-09 15:54:35,300 - INFO - Total parameter numbers: 1387671 2023-08-09 15:54:35,301 - INFO - You select adam optimizer. 2023-08-09 15:54:35,302 - WARNING - Received none train loss func and will use the loss func defined in the model. 2023-08-09 15:54:35,302 - INFO - Start training ... 2023-08-09 15:54:35,302 - INFO - num_batches:122 2023-08-09 15:56:02,265 - INFO - epoch complete! 2023-08-09 15:56:02,266 - INFO - evaluating now! 2023-08-09 15:56:18,944 - INFO - Epoch [0/100] train_loss: 14465.6103, val_loss: 10248.0996, lr: 0.000100, 103.64s 2023-08-09 15:56:19,001 - INFO - Saved model at 0 2023-08-09 15:56:19,001 - INFO - Val loss decrease from inf to 10248.0996, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch0.tar 2023-08-09 15:57:41,330 - INFO - epoch complete! 2023-08-09 15:57:41,330 - INFO - evaluating now! 2023-08-09 15:57:58,344 - INFO - Epoch [1/100] train_loss: 6814.1271, val_loss: 4005.9501, lr: 0.000100, 99.34s 2023-08-09 15:57:58,401 - INFO - Saved model at 1 2023-08-09 15:57:58,401 - INFO - Val loss decrease from 10248.0996 to 4005.9501, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch1.tar 2023-08-09 15:59:23,319 - INFO - epoch complete! 2023-08-09 15:59:23,319 - INFO - evaluating now! 2023-08-09 15:59:40,002 - INFO - Epoch [2/100] train_loss: 2508.9717, val_loss: 1423.6543, lr: 0.000100, 101.60s 2023-08-09 15:59:40,062 - INFO - Saved model at 2 2023-08-09 15:59:40,062 - INFO - Val loss decrease from 4005.9501 to 1423.6543, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch2.tar 2023-08-09 16:01:02,627 - INFO - epoch complete! 2023-08-09 16:01:02,627 - INFO - evaluating now! 2023-08-09 16:01:19,349 - INFO - Epoch [3/100] train_loss: 1175.5264, val_loss: 895.1244, lr: 0.000100, 99.29s 2023-08-09 16:01:19,430 - INFO - Saved model at 3 2023-08-09 16:01:19,431 - INFO - Val loss decrease from 1423.6543 to 895.1244, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch3.tar 2023-08-09 16:02:41,749 - INFO - epoch complete! 2023-08-09 16:02:41,749 - INFO - evaluating now! 2023-08-09 16:02:59,114 - INFO - Epoch [4/100] train_loss: 865.0875, val_loss: 724.9629, lr: 0.000100, 99.68s 2023-08-09 16:02:59,173 - INFO - Saved model at 4 2023-08-09 16:02:59,173 - INFO - Val loss decrease from 895.1244 to 724.9629, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch4.tar 2023-08-09 16:04:23,159 - INFO - epoch complete! 2023-08-09 16:04:23,159 - INFO - evaluating now! 2023-08-09 16:04:39,620 - INFO - Epoch [5/100] train_loss: 736.2818, val_loss: 649.4947, lr: 0.000100, 100.45s 2023-08-09 16:04:39,677 - INFO - Saved model at 5 2023-08-09 16:04:39,677 - INFO - Val loss decrease from 724.9629 to 649.4947, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch5.tar 2023-08-09 16:06:02,098 - INFO - epoch complete! 2023-08-09 16:06:02,099 - INFO - evaluating now! 2023-08-09 16:06:18,610 - INFO - Epoch [6/100] train_loss: 665.1679, val_loss: 609.9767, lr: 0.000100, 98.93s 2023-08-09 16:06:18,666 - INFO - Saved model at 6 2023-08-09 16:06:18,666 - INFO - Val loss decrease from 649.4947 to 609.9767, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch6.tar 2023-08-09 16:07:42,448 - INFO - epoch complete! 2023-08-09 16:07:42,449 - INFO - evaluating now! 
2023-08-09 16:07:59,409 - INFO - Epoch [7/100] train_loss: 617.5797, val_loss: 582.5920, lr: 0.000100, 100.74s 2023-08-09 16:07:59,468 - INFO - Saved model at 7 2023-08-09 16:07:59,468 - INFO - Val loss decrease from 609.9767 to 582.5920, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch7.tar 2023-08-09 16:09:24,398 - INFO - epoch complete! 2023-08-09 16:09:24,398 - INFO - evaluating now! 2023-08-09 16:09:41,227 - INFO - Epoch [8/100] train_loss: 581.6806, val_loss: 564.7435, lr: 0.000100, 101.76s 2023-08-09 16:09:41,285 - INFO - Saved model at 8 2023-08-09 16:09:41,285 - INFO - Val loss decrease from 582.5920 to 564.7435, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch8.tar 2023-08-09 16:11:03,567 - INFO - epoch complete! 2023-08-09 16:11:03,567 - INFO - evaluating now! 2023-08-09 16:11:19,922 - INFO - Epoch [9/100] train_loss: 553.8725, val_loss: 553.3213, lr: 0.000100, 98.64s 2023-08-09 16:11:19,980 - INFO - Saved model at 9 2023-08-09 16:11:19,980 - INFO - Val loss decrease from 564.7435 to 553.3213, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch9.tar 2023-08-09 16:12:43,582 - INFO - epoch complete! 2023-08-09 16:12:43,583 - INFO - evaluating now! 2023-08-09 16:13:00,733 - INFO - Epoch [10/100] train_loss: 528.9948, val_loss: 545.7058, lr: 0.000100, 100.75s 2023-08-09 16:13:00,790 - INFO - Saved model at 10 2023-08-09 16:13:00,790 - INFO - Val loss decrease from 553.3213 to 545.7058, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch10.tar 2023-08-09 16:14:24,520 - INFO - epoch complete! 2023-08-09 16:14:24,520 - INFO - evaluating now! 2023-08-09 16:14:41,221 - INFO - Epoch [11/100] train_loss: 508.3822, val_loss: 536.9953, lr: 0.000100, 100.43s 2023-08-09 16:14:41,278 - INFO - Saved model at 11 2023-08-09 16:14:41,278 - INFO - Val loss decrease from 545.7058 to 536.9953, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch11.tar 2023-08-09 16:16:03,973 - INFO - epoch complete! 2023-08-09 16:16:03,974 - INFO - evaluating now! 2023-08-09 16:16:20,502 - INFO - Epoch [12/100] train_loss: 491.6843, val_loss: 524.7531, lr: 0.000100, 99.22s 2023-08-09 16:16:20,557 - INFO - Saved model at 12 2023-08-09 16:16:20,557 - INFO - Val loss decrease from 536.9953 to 524.7531, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch12.tar 2023-08-09 16:17:45,695 - INFO - epoch complete! 2023-08-09 16:17:45,695 - INFO - evaluating now! 2023-08-09 16:18:02,270 - INFO - Epoch [13/100] train_loss: 477.9190, val_loss: 522.7174, lr: 0.000100, 101.71s 2023-08-09 16:18:02,329 - INFO - Saved model at 13 2023-08-09 16:18:02,329 - INFO - Val loss decrease from 524.7531 to 522.7174, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch13.tar 2023-08-09 16:19:26,460 - INFO - epoch complete! 2023-08-09 16:19:26,460 - INFO - evaluating now! 2023-08-09 16:19:42,863 - INFO - Epoch [14/100] train_loss: 465.9773, val_loss: 514.4992, lr: 0.000100, 100.53s 2023-08-09 16:19:42,920 - INFO - Saved model at 14 2023-08-09 16:19:42,921 - INFO - Val loss decrease from 522.7174 to 514.4992, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch14.tar 2023-08-09 16:21:04,820 - INFO - epoch complete! 2023-08-09 16:21:04,820 - INFO - evaluating now! 
2023-08-09 16:21:21,463 - INFO - Epoch [15/100] train_loss: 455.4796, val_loss: 510.5923, lr: 0.000100, 98.54s 2023-08-09 16:21:21,521 - INFO - Saved model at 15 2023-08-09 16:21:21,522 - INFO - Val loss decrease from 514.4992 to 510.5923, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch15.tar 2023-08-09 16:22:45,701 - INFO - epoch complete! 2023-08-09 16:22:45,701 - INFO - evaluating now! 2023-08-09 16:23:02,385 - INFO - Epoch [16/100] train_loss: 445.7381, val_loss: 504.4973, lr: 0.000100, 100.86s 2023-08-09 16:23:02,446 - INFO - Saved model at 16 2023-08-09 16:23:02,446 - INFO - Val loss decrease from 510.5923 to 504.4973, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch16.tar 2023-08-09 16:24:24,996 - INFO - epoch complete! 2023-08-09 16:24:24,996 - INFO - evaluating now! 2023-08-09 16:24:41,778 - INFO - Epoch [17/100] train_loss: 437.7733, val_loss: 496.1740, lr: 0.000100, 99.33s 2023-08-09 16:24:41,835 - INFO - Saved model at 17 2023-08-09 16:24:41,835 - INFO - Val loss decrease from 504.4973 to 496.1740, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch17.tar 2023-08-09 16:26:04,883 - INFO - epoch complete! 2023-08-09 16:26:04,883 - INFO - evaluating now! 2023-08-09 16:26:22,021 - INFO - Epoch [18/100] train_loss: 429.0499, val_loss: 496.4909, lr: 0.000100, 100.19s 2023-08-09 16:27:47,263 - INFO - epoch complete! 2023-08-09 16:27:47,263 - INFO - evaluating now! 2023-08-09 16:28:03,875 - INFO - Epoch [19/100] train_loss: 422.4816, val_loss: 486.4580, lr: 0.000100, 101.85s 2023-08-09 16:28:03,933 - INFO - Saved model at 19 2023-08-09 16:28:03,933 - INFO - Val loss decrease from 496.1740 to 486.4580, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch19.tar 2023-08-09 16:29:26,639 - INFO - epoch complete! 2023-08-09 16:29:26,640 - INFO - evaluating now! 2023-08-09 16:29:43,152 - INFO - Epoch [20/100] train_loss: 415.6089, val_loss: 480.7355, lr: 0.000100, 99.22s 2023-08-09 16:29:43,209 - INFO - Saved model at 20 2023-08-09 16:29:43,209 - INFO - Val loss decrease from 486.4580 to 480.7355, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch20.tar 2023-08-09 16:31:06,466 - INFO - epoch complete! 2023-08-09 16:31:06,466 - INFO - evaluating now! 2023-08-09 16:31:23,045 - INFO - Epoch [21/100] train_loss: 408.9518, val_loss: 473.4477, lr: 0.000100, 99.84s 2023-08-09 16:31:23,102 - INFO - Saved model at 21 2023-08-09 16:31:23,103 - INFO - Val loss decrease from 480.7355 to 473.4477, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch21.tar 2023-08-09 16:32:47,617 - INFO - epoch complete! 2023-08-09 16:32:47,617 - INFO - evaluating now! 2023-08-09 16:33:04,503 - INFO - Epoch [22/100] train_loss: 403.1863, val_loss: 471.0713, lr: 0.000100, 101.40s 2023-08-09 16:33:04,565 - INFO - Saved model at 22 2023-08-09 16:33:04,565 - INFO - Val loss decrease from 473.4477 to 471.0713, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch22.tar 2023-08-09 16:34:26,573 - INFO - epoch complete! 2023-08-09 16:34:26,574 - INFO - evaluating now! 2023-08-09 16:34:43,137 - INFO - Epoch [23/100] train_loss: 398.0524, val_loss: 466.4490, lr: 0.000100, 98.57s 2023-08-09 16:34:43,202 - INFO - Saved model at 23 2023-08-09 16:34:43,202 - INFO - Val loss decrease from 471.0713 to 466.4490, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch23.tar 2023-08-09 16:36:06,874 - INFO - epoch complete! 2023-08-09 16:36:06,875 - INFO - evaluating now! 
2023-08-09 16:36:23,645 - INFO - Epoch [24/100] train_loss: 393.3629, val_loss: 461.0356, lr: 0.000100, 100.44s 2023-08-09 16:36:23,704 - INFO - Saved model at 24 2023-08-09 16:36:23,704 - INFO - Val loss decrease from 466.4490 to 461.0356, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch24.tar 2023-08-09 16:37:48,414 - INFO - epoch complete! 2023-08-09 16:37:48,414 - INFO - evaluating now! 2023-08-09 16:38:04,839 - INFO - Epoch [25/100] train_loss: 388.6540, val_loss: 458.8537, lr: 0.000100, 101.14s 2023-08-09 16:38:04,897 - INFO - Saved model at 25 2023-08-09 16:38:04,898 - INFO - Val loss decrease from 461.0356 to 458.8537, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch25.tar 2023-08-09 16:39:27,455 - INFO - epoch complete! 2023-08-09 16:39:27,455 - INFO - evaluating now! 2023-08-09 16:39:43,986 - INFO - Epoch [26/100] train_loss: 384.2978, val_loss: 457.4638, lr: 0.000100, 99.09s 2023-08-09 16:39:44,042 - INFO - Saved model at 26 2023-08-09 16:39:44,042 - INFO - Val loss decrease from 458.8537 to 457.4638, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch26.tar 2023-08-09 16:41:09,163 - INFO - epoch complete! 2023-08-09 16:41:09,164 - INFO - evaluating now! 2023-08-09 16:41:25,946 - INFO - Epoch [27/100] train_loss: 380.4705, val_loss: 452.5569, lr: 0.000100, 101.90s 2023-08-09 16:41:26,004 - INFO - Saved model at 27 2023-08-09 16:41:26,005 - INFO - Val loss decrease from 457.4638 to 452.5569, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch27.tar 2023-08-09 16:42:49,370 - INFO - epoch complete! 2023-08-09 16:42:49,371 - INFO - evaluating now! 2023-08-09 16:43:05,992 - INFO - Epoch [28/100] train_loss: 376.6974, val_loss: 447.9182, lr: 0.000100, 99.99s 2023-08-09 16:43:06,051 - INFO - Saved model at 28 2023-08-09 16:43:06,051 - INFO - Val loss decrease from 452.5569 to 447.9182, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch28.tar 2023-08-09 16:44:28,209 - INFO - epoch complete! 2023-08-09 16:44:28,210 - INFO - evaluating now! 2023-08-09 16:44:45,033 - INFO - Epoch [29/100] train_loss: 373.3920, val_loss: 443.4390, lr: 0.000100, 98.98s 2023-08-09 16:44:45,092 - INFO - Saved model at 29 2023-08-09 16:44:45,092 - INFO - Val loss decrease from 447.9182 to 443.4390, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch29.tar 2023-08-09 16:46:09,252 - INFO - epoch complete! 2023-08-09 16:46:09,252 - INFO - evaluating now! 2023-08-09 16:46:26,459 - INFO - Epoch [30/100] train_loss: 370.0457, val_loss: 442.4330, lr: 0.000100, 101.37s 2023-08-09 16:46:26,523 - INFO - Saved model at 30 2023-08-09 16:46:26,523 - INFO - Val loss decrease from 443.4390 to 442.4330, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch30.tar 2023-08-09 16:47:49,394 - INFO - epoch complete! 2023-08-09 16:47:49,394 - INFO - evaluating now! 2023-08-09 16:48:05,906 - INFO - Epoch [31/100] train_loss: 366.8464, val_loss: 436.8745, lr: 0.000100, 99.38s 2023-08-09 16:48:05,964 - INFO - Saved model at 31 2023-08-09 16:48:05,964 - INFO - Val loss decrease from 442.4330 to 436.8745, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch31.tar 2023-08-09 16:49:29,383 - INFO - epoch complete! 2023-08-09 16:49:29,383 - INFO - evaluating now! 
2023-08-09 16:49:46,276 - INFO - Epoch [32/100] train_loss: 364.0575, val_loss: 435.8164, lr: 0.000100, 100.31s 2023-08-09 16:49:46,334 - INFO - Saved model at 32 2023-08-09 16:49:46,335 - INFO - Val loss decrease from 436.8745 to 435.8164, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch32.tar 2023-08-09 16:51:11,435 - INFO - epoch complete! 2023-08-09 16:51:11,435 - INFO - evaluating now! 2023-08-09 16:51:28,070 - INFO - Epoch [33/100] train_loss: 361.2620, val_loss: 432.2225, lr: 0.000100, 101.74s 2023-08-09 16:51:28,128 - INFO - Saved model at 33 2023-08-09 16:51:28,128 - INFO - Val loss decrease from 435.8164 to 432.2225, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch33.tar 2023-08-09 16:52:49,880 - INFO - epoch complete! 2023-08-09 16:52:49,880 - INFO - evaluating now! 2023-08-09 16:53:06,208 - INFO - Epoch [34/100] train_loss: 359.0331, val_loss: 429.9115, lr: 0.000100, 98.08s 2023-08-09 16:53:06,266 - INFO - Saved model at 34 2023-08-09 16:53:06,267 - INFO - Val loss decrease from 432.2225 to 429.9115, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch34.tar 2023-08-09 16:54:29,374 - INFO - epoch complete! 2023-08-09 16:54:29,374 - INFO - evaluating now! 2023-08-09 16:54:46,076 - INFO - Epoch [35/100] train_loss: 357.4683, val_loss: 429.0914, lr: 0.000100, 99.81s 2023-08-09 16:54:46,134 - INFO - Saved model at 35 2023-08-09 16:54:46,134 - INFO - Val loss decrease from 429.9115 to 429.0914, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch35.tar 2023-08-09 16:56:09,274 - INFO - epoch complete! 2023-08-09 16:56:09,274 - INFO - evaluating now! 2023-08-09 16:56:25,861 - INFO - Epoch [36/100] train_loss: 354.7035, val_loss: 426.1112, lr: 0.000100, 99.73s 2023-08-09 16:56:25,920 - INFO - Saved model at 36 2023-08-09 16:56:25,920 - INFO - Val loss decrease from 429.0914 to 426.1112, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch36.tar 2023-08-09 16:57:48,279 - INFO - epoch complete! 2023-08-09 16:57:48,280 - INFO - evaluating now! 2023-08-09 16:58:05,159 - INFO - Epoch [37/100] train_loss: 352.5587, val_loss: 423.6635, lr: 0.000100, 99.24s 2023-08-09 16:58:05,219 - INFO - Saved model at 37 2023-08-09 16:58:05,219 - INFO - Val loss decrease from 426.1112 to 423.6635, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch37.tar 2023-08-09 16:59:30,696 - INFO - epoch complete! 2023-08-09 16:59:30,697 - INFO - evaluating now! 2023-08-09 16:59:47,604 - INFO - Epoch [38/100] train_loss: 350.5012, val_loss: 424.5287, lr: 0.000100, 102.38s 2023-08-09 17:01:11,323 - INFO - epoch complete! 2023-08-09 17:01:11,323 - INFO - evaluating now! 2023-08-09 17:01:27,776 - INFO - Epoch [39/100] train_loss: 349.1850, val_loss: 419.0959, lr: 0.000100, 100.17s 2023-08-09 17:01:27,834 - INFO - Saved model at 39 2023-08-09 17:01:27,834 - INFO - Val loss decrease from 423.6635 to 419.0959, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch39.tar 2023-08-09 17:02:50,557 - INFO - epoch complete! 2023-08-09 17:02:50,558 - INFO - evaluating now! 2023-08-09 17:03:07,292 - INFO - Epoch [40/100] train_loss: 347.1738, val_loss: 418.5162, lr: 0.000100, 99.46s 2023-08-09 17:03:07,350 - INFO - Saved model at 40 2023-08-09 17:03:07,350 - INFO - Val loss decrease from 419.0959 to 418.5162, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch40.tar 2023-08-09 17:04:32,938 - INFO - epoch complete! 2023-08-09 17:04:32,938 - INFO - evaluating now! 
2023-08-09 17:04:50,038 - INFO - Epoch [41/100] train_loss: 345.4858, val_loss: 419.8255, lr: 0.000100, 102.69s 2023-08-09 17:06:12,587 - INFO - epoch complete! 2023-08-09 17:06:12,587 - INFO - evaluating now! 2023-08-09 17:06:29,130 - INFO - Epoch [42/100] train_loss: 344.1218, val_loss: 415.7969, lr: 0.000100, 99.09s 2023-08-09 17:06:29,189 - INFO - Saved model at 42 2023-08-09 17:06:29,189 - INFO - Val loss decrease from 418.5162 to 415.7969, saving to ./libcity/cache/8834/model_cache/ASTGCN_PEMSD4_epoch42.tar

aptx1231 commented 1 year ago

Right, just modify the dataset's config.json. Usually only the traffic_flow feature is used.

OvOYu commented 1 year ago

Thanks. One more question: where is the code that prints the loss for each of the 1-12 prediction time steps?

aptx1231 commented 1 year ago

evaluator/traffic_state_evaluator.py
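
If it helps, here is a simplified sketch of what such a per-horizon metric loop looks like. This is not the library's exact code, just the idea, with MAE as the example metric:

```python
# Simplified sketch of a per-horizon metric loop like the one in
# evaluator/traffic_state_evaluator.py (illustrative, not the library's exact code).
# y_pred and y_true have shape (num_samples, output_window, num_nodes, output_dim).
import torch

def per_horizon_mae(y_pred: torch.Tensor, y_true: torch.Tensor) -> dict:
    results = {}
    output_window = y_true.shape[1]
    for i in range(1, output_window + 1):
        # Metric for the i-th prediction step only (the "single" evaluator mode).
        step_error = torch.abs(y_pred[:, i - 1] - y_true[:, i - 1])
        results[f"MAE@{i}"] = torch.mean(step_error).item()
    return results
```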

OvOYu commented 1 year ago

Thank you so much for the guidance, and best wishes for your work! I'm planning to implement my own model with your framework soon; I hope you can point me in the right direction if I run into problems. Thanks!

OvOYu commented 1 year ago

image

Is this normal?

aptx1231 commented 1 year ago

It looks normal. If you check the code, this model uses an MSE loss, so the values are relatively large.

aptx1231 commented 1 year ago

Traffic flow:

"info": {
    "data_col": ["traffic_flow"],
    "weight_col": "cost",
    "data_files": ["PEMSD4"],
    "geo_file": "PEMSD4",
    "rel_file": "PEMSD4",
    "output_dim": 1,
    "time_intervals": 300,
    "init_weight_inf_or_zero": "zero",
    "set_weight_link_or_dist": "link",
    "calculate_weight_adj": false,
    "weight_adj_epsilon": 0.1
}

Traffic speed:

"info": {
    "data_col": ["traffic_speed"],
    "weight_col": "cost",
    "data_files": ["PEMSD4"],
    "geo_file": "PEMSD4",
    "rel_file": "PEMSD4",
    "output_dim": 1,
    "time_intervals": 300,
    "init_weight_inf_or_zero": "zero",
    "set_weight_link_or_dist": "link",
    "calculate_weight_adj": false,
    "weight_adj_epsilon": 0.1
}

aptx1231 commented 1 year ago

Also, modify config.json following the examples above.

OvOYu commented 1 year ago

image

OK, that's how I trained it. I'd like to ask why the error here is over 1000, while the other code I'm running mostly reports single-digit values, which confuses me. The image below is another model I'm currently running.

image

aptx1231 commented 1 year ago

It's because of this model's code: as you can see, it uses MSE and does not set the masked null value to 0. You can compare the loss functions of different models, which are defined in each model's .py file. The loss can be replaced; just swap it out yourself.
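
As an illustration, here is a hedged template of what swapping the loss could look like, using the helpers in libcity/model/loss.py (masked MAE with a null value of 0). The wrapper function and its name are mine; each model's actual calculate_loss differs slightly, so adapt the body inside the model's own .py file:

```python
# Template (not any model's actual code) for computing a masked-MAE training loss
# with LibCity's loss helpers instead of plain MSE.
import torch
from libcity.model import loss  # provides masked_mae_torch, masked_mse_torch, ...

def masked_mae_training_loss(y_predicted: torch.Tensor,
                             y_true: torch.Tensor,
                             scaler,
                             output_dim: int) -> torch.Tensor:
    # Un-normalize predictions and targets before computing the reported loss.
    y_true = scaler.inverse_transform(y_true[..., :output_dim])
    y_predicted = scaler.inverse_transform(y_predicted[..., :output_dim])
    # masked_mae_torch(..., 0) ignores entries whose true value is 0 (missing readings),
    # which keeps the loss on the same scale as the MAE metric instead of the much
    # larger values produced by un-masked MSE on raw flow data.
    return loss.masked_mae_torch(y_predicted, y_true, 0)
```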

OvOYu commented 1 year ago

Thanks!!