wangxr0526 / RetroPrime

Code for the single-step retrosynthesis model RetroPrime
MIT License

Accuracy drops rapidly when training the P2S model on the USPTO-full dataset #6

Open LaHauzel opened 1 year ago

LaHauzel commented 1 year ago

I would like to reproduce the results on the USPTO-full dataset, but I ran into a problem: the accuracy of the P2S model drops rapidly during training.

I have trained for over 50,000 steps and the accuracy was only about 50%. Is this normal?

[2022-08-09 14:28:33,864 INFO] encoder: 41252864
[2022-08-09 14:28:33,865 INFO] decoder: 54924817
[2022-08-09 14:28:33,865 INFO] * number of parameters: 96177681
[2022-08-09 14:28:33,889 INFO] Start training...
[2022-08-09 14:28:41,843 INFO] Loading train dataset from data/uspto_full_pos_pred/uspto_full_pos_pred.train.0.pt, number of examples: 1000000
[2022-08-09 14:28:41,844 INFO] train_iter finished
[2022-08-09 15:03:38,255 INFO] Step 1000/250000; acc:  81.60; ppl:  1.78; xent: 0.58; lr: 0.00012; 7395/7535 tok/s;   2096 sec
[2022-08-09 15:38:33,482 INFO] Step 2000/250000; acc:  95.38; ppl:  1.15; xent: 0.14; lr: 0.00025; 7380/7566 tok/s;   4192 sec
[2022-08-09 16:05:19,712 INFO] Loading train dataset from data/uspto_full_pos_pred/uspto_full_pos_pred.train.1.pt, number of examples: 1000000
[2022-08-09 16:13:46,337 INFO] Step 3000/250000; acc:  97.17; ppl:  1.09; xent: 0.09; lr: 0.00037; 7480/7610 tok/s;   6304 sec
[2022-08-09 16:48:51,050 INFO] Step 4000/250000; acc:  96.94; ppl:  1.11; xent: 0.11; lr: 0.00049; 7355/7484 tok/s;   8409 sec
[2022-08-09 17:23:55,865 INFO] Step 5000/250000; acc:  96.57; ppl:  1.11; xent: 0.10; lr: 0.00062; 7270/7407 tok/s;  10514 sec
[2022-08-09 17:42:44,180 INFO] Loading train dataset from data/uspto_full_pos_pred/uspto_full_pos_pred.train.2.pt, number of examples: 1000000
[2022-08-09 17:59:22,875 INFO] Step 6000/250000; acc:  96.90; ppl:  1.09; xent: 0.09; lr: 0.00074; 7399/7564 tok/s;  12641 sec
[2022-08-09 18:34:50,745 INFO] Step 7000/250000; acc:  95.93; ppl:  1.12; xent: 0.11; lr: 0.00086; 7320/7514 tok/s;  14769 sec
[2022-08-09 19:10:08,914 INFO] Step 8000/250000; acc:  96.79; ppl:  1.10; xent: 0.09; lr: 0.00099; 7289/7435 tok/s;  16887 sec
[2022-08-09 19:20:38,308 INFO] Loading train dataset from data/uspto_full_pos_pred/uspto_full_pos_pred.train.3.pt, number of examples: 1000000
[2022-08-09 19:45:17,775 INFO] Step 9000/250000; acc:  96.71; ppl:  1.11; xent: 0.10; lr: 0.00093; 7414/7583 tok/s;  18996 sec
[2022-08-09 20:20:10,783 INFO] Step 10000/250000; acc:  96.74; ppl:  1.11; xent: 0.10; lr: 0.00088; 7208/7353 tok/s;  21089 sec
[2022-08-09 20:20:10,787 INFO] Saving checkpoint experiments/checkpoints/uspto_full_pos_pred/151_uspto_full_pos_pred_model_step_10000.pt
[2022-08-09 20:55:25,013 INFO] Step 11000/250000; acc:  88.51; ppl:  1.44; xent: 0.36; lr: 0.00084; 6400/6718 tok/s;  23203 sec
[2022-08-09 21:00:21,009 INFO] Loading train dataset from data/uspto_full_pos_pred/uspto_full_pos_pred.train.4.pt, number of examples: 1000000
[2022-08-09 21:30:28,027 INFO] Step 12000/250000; acc:  92.56; ppl:  1.27; xent: 0.24; lr: 0.00081; 7225/7443 tok/s;  25306 sec
[2022-08-09 22:05:20,563 INFO] Step 13000/250000; acc:  85.15; ppl:  1.60; xent: 0.47; lr: 0.00078; 7314/7448 tok/s;  27399 sec
[2022-08-09 22:40:05,992 INFO] Step 14000/250000; acc:  62.15; ppl:  3.08; xent: 1.13; lr: 0.00075; 7438/7593 tok/s;  29484 sec
[2022-08-09 22:40:22,370 INFO] Loading train dataset from data/uspto_full_pos_pred/uspto_full_pos_pred.train.5.pt, number of examples: 1000000
[2022-08-09 23:14:50,982 INFO] Step 15000/250000; acc:  48.91; ppl:  4.95; xent: 1.60; lr: 0.00072; 7404/7537 tok/s;  31569 sec
[2022-08-09 23:49:25,286 INFO] Step 16000/250000; acc:  51.57; ppl:  4.75; xent: 1.56; lr: 0.00070; 6632/7134 tok/s;  33643 sec
[2022-08-10 00:19:41,798 INFO] Loading train dataset from data/uspto_full_pos_pred/uspto_full_pos_pred.train.6.pt, number of examples: 1000000
[2022-08-10 00:24:05,406 INFO] Step 17000/250000; acc:  52.31; ppl:  4.61; xent: 1.53; lr: 0.00068; 7293/7659 tok/s;  35724 sec
[2022-08-10 00:58:41,297 INFO] Step 18000/250000; acc:  50.13; ppl:  4.72; xent: 1.55; lr: 0.00066; 6996/7350 tok/s;  37799 sec
[2022-08-10 01:33:25,863 INFO] Step 19000/250000; acc:  52.59; ppl:  4.30; xent: 1.46; lr: 0.00064; 7378/7524 tok/s;  39884 sec
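
For reference, here is a minimal sketch (a hypothetical helper, not part of RetroPrime; the train.log path and the 90% threshold are made up) that pulls the step/accuracy pairs out of an OpenNMT-py log in the format above, which makes the collapse after step 10,000 easy to see:

import re

STEP_RE = re.compile(r"Step (\d+)/\d+; acc:\s+([\d.]+)")

def parse_acc(log_path):
    """Return (step, accuracy) pairs from an OpenNMT-py training log."""
    pairs = []
    with open(log_path) as fh:
        for line in fh:
            match = STEP_RE.search(line)
            if match:
                pairs.append((int(match.group(1)), float(match.group(2))))
    return pairs

if __name__ == "__main__":
    for step, acc in parse_acc("train.log"):  # hypothetical log path
        flag = "  <-- drop" if acc < 90.0 else ""
        print(f"step {step:>6}: acc {acc:6.2f}{flag}")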

Any reply would be greatly appreciated!

wangxr0526 commented 1 year ago

There is something strange about your result: your model has more parameters than mine. Here is the log from when I trained:

[2020-06-18 22:36:52,675 INFO] encoder: 17196544
[2020-06-18 22:36:52,676 INFO] decoder: 20624775
[2020-06-18 22:36:52,676 INFO] * number of parameters: 37821319
[2020-06-18 22:36:52,680 INFO] Start training...
[2020-06-18 22:37:02,454 INFO] Loading train dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.train.0.pt, number of examples: 1000000
[2020-06-18 22:50:40,693 INFO] Step 1000/250000; acc:  80.61; ppl:  1.85; xent: 0.61; lr: 0.00017; 19217/19596 tok/s;    818 sec
[2020-06-18 23:04:13,162 INFO] Step 2000/250000; acc:  90.84; ppl:  1.32; xent: 0.28; lr: 0.00035; 19668/20046 tok/s;   1631 sec
[2020-06-18 23:15:21,067 INFO] Loading train dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.train.1.pt, number of examples: 1000000
[2020-06-18 23:18:13,975 INFO] Step 3000/250000; acc:  96.01; ppl:  1.12; xent: 0.12; lr: 0.00052; 18374/18751 tok/s;   2472 sec
[2020-06-18 23:32:11,225 INFO] Step 4000/250000; acc:  96.65; ppl:  1.10; xent: 0.10; lr: 0.00070; 18587/19006 tok/s;   3309 sec
[2020-06-18 23:46:05,067 INFO] Step 5000/250000; acc:  96.29; ppl:  1.11; xent: 0.10; lr: 0.00087; 19264/19748 tok/s;   4143 sec
[2020-06-18 23:54:32,679 INFO] Loading train dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.train.2.pt, number of examples: 1000000
[2020-06-19 00:00:19,280 INFO] Step 6000/250000; acc:  96.68; ppl:  1.10; xent: 0.09; lr: 0.00105; 18430/18924 tok/s;   4997 sec
[2020-06-19 00:14:16,937 INFO] Step 7000/250000; acc:  96.76; ppl:  1.09; xent: 0.09; lr: 0.00122; 18929/19435 tok/s;   5834 sec
[2020-06-19 00:28:03,421 INFO] Step 8000/250000; acc:  95.65; ppl:  1.14; xent: 0.13; lr: 0.00140; 18262/18668 tok/s;   6661 sec
[2020-06-19 00:33:36,229 INFO] Loading train dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.train.3.pt, number of examples: 1000000
[2020-06-19 00:42:11,240 INFO] Step 9000/250000; acc:  96.74; ppl:  1.10; xent: 0.09; lr: 0.00132; 17961/18408 tok/s;   7509 sec
[2020-06-19 00:55:54,384 INFO] Step 10000/250000; acc:  98.10; ppl:  1.06; xent: 0.06; lr: 0.00125; 19304/19586 tok/s;   8332 sec
[2020-06-19 00:55:54,754 INFO] Loading valid dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.valid.0.pt, number of examples: 94688
[2020-06-19 00:58:47,280 INFO] Validation perplexity: 1.0838
[2020-06-19 00:58:47,280 INFO] Validation accuracy: 97.17
[2020-06-19 00:58:47,283 INFO] Saving checkpoint experiments/checkpoints/USPTO-full_pos_pred/USPTO-full_pos_pred_model_step_10000.pt
[2020-06-19 01:12:30,902 INFO] Step 11000/250000; acc:  97.10; ppl:  1.09; xent: 0.08; lr: 0.00119; 19133/19609 tok/s;   9328 sec
[2020-06-19 01:15:08,032 INFO] Loading train dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.train.4.pt, number of examples: 1000000
[2020-06-19 01:26:28,185 INFO] Step 12000/250000; acc:  96.97; ppl:  1.09; xent: 0.09; lr: 0.00114; 18967/19467 tok/s;  10166 sec
[2020-06-19 01:40:10,221 INFO] Step 13000/250000; acc:  96.69; ppl:  1.09; xent: 0.09; lr: 0.00110; 18981/19562 tok/s;  10988 sec
[2020-06-19 01:53:38,439 INFO] Loading train dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.train.5.pt, number of examples: 1000000
[2020-06-19 01:54:07,597 INFO] Step 14000/250000; acc:  97.77; ppl:  1.06; xent: 0.06; lr: 0.00106; 19650/20056 tok/s;  11825 sec
[2020-06-19 02:07:43,204 INFO] Step 15000/250000; acc:  97.49; ppl:  1.07; xent: 0.07; lr: 0.00102; 19580/20035 tok/s;  12641 sec
[2020-06-19 02:21:17,203 INFO] Step 16000/250000; acc:  97.46; ppl:  1.07; xent: 0.07; lr: 0.00099; 19184/19635 tok/s;  13455 sec
[2020-06-19 02:32:17,909 INFO] Loading train dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.train.6.pt, number of examples: 1000000
[2020-06-19 02:35:32,193 INFO] Step 17000/250000; acc:  97.61; ppl:  1.07; xent: 0.06; lr: 0.00096; 19575/19978 tok/s;  14310 sec
[2020-06-19 02:49:12,476 INFO] Step 18000/250000; acc:  97.68; ppl:  1.07; xent: 0.07; lr: 0.00093; 19069/19463 tok/s;  15130 sec
[2020-06-19 03:02:56,211 INFO] Step 19000/250000; acc:  96.87; ppl:  1.09; xent: 0.09; lr: 0.00091; 18738/19345 tok/s;  15954 sec
[2020-06-19 03:10:53,807 INFO] Loading train dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.train.7.pt, number of examples: 574730
[2020-06-19 03:17:11,109 INFO] Step 20000/250000; acc:  97.78; ppl:  1.07; xent: 0.06; lr: 0.00088; 18364/18793 tok/s;  16809 sec
[2020-06-19 03:17:12,856 INFO] Loading valid dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.valid.0.pt, number of examples: 94688
[2020-06-19 03:20:10,935 INFO] Validation perplexity: 1.07151
[2020-06-19 03:20:10,935 INFO] Validation accuracy: 97.5526
[2020-06-19 03:20:10,939 INFO] Saving checkpoint experiments/checkpoints/USPTO-full_pos_pred/USPTO-full_pos_pred_model_step_20000.pt
[2020-06-19 03:34:04,538 INFO] Step 21000/250000; acc:  96.57; ppl:  1.10; xent: 0.09; lr: 0.00086; 18436/19035 tok/s;  17822 sec
[2020-06-19 03:36:31,871 INFO] Loading train dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.train.0.pt, number of examples: 1000000
[2020-06-19 03:48:13,140 INFO] Step 22000/250000; acc:  98.31; ppl:  1.05; xent: 0.05; lr: 0.00084; 19101/19403 tok/s;  18671 sec
[2020-06-19 04:02:09,840 INFO] Step 23000/250000; acc:  96.89; ppl:  1.09; xent: 0.08; lr: 0.00082; 18634/19147 tok/s;  19507 sec
[2020-06-19 04:15:47,451 INFO] Loading train dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.train.1.pt, number of examples: 1000000
[2020-06-19 04:16:20,862 INFO] Step 24000/250000; acc:  97.34; ppl:  1.08; xent: 0.07; lr: 0.00081; 18775/19221 tok/s;  20358 sec
[2020-06-19 04:29:58,688 INFO] Step 25000/250000; acc:  97.28; ppl:  1.07; xent: 0.07; lr: 0.00079; 19054/19479 tok/s;  21176 sec
[2020-06-19 04:43:39,857 INFO] Step 26000/250000; acc:  98.15; ppl:  1.05; xent: 0.05; lr: 0.00078; 19073/19458 tok/s;  21997 sec
[2020-06-19 04:54:19,760 INFO] Loading train dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.train.2.pt, number of examples: 1000000
[2020-06-19 04:57:44,830 INFO] Step 27000/250000; acc:  97.56; ppl:  1.07; xent: 0.07; lr: 0.00076; 19056/19535 tok/s;  22842 sec
[2020-06-19 05:11:31,931 INFO] Step 28000/250000; acc:  97.28; ppl:  1.08; xent: 0.07; lr: 0.00075; 19387/19902 tok/s;  23669 sec
[2020-06-19 05:25:10,977 INFO] Step 29000/250000; acc:  98.10; ppl:  1.05; xent: 0.05; lr: 0.00073; 19276/19677 tok/s;  24489 sec
[2020-06-19 05:32:51,424 INFO] Loading train dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.train.3.pt, number of examples: 1000000
[2020-06-19 05:39:13,205 INFO] Step 30000/250000; acc:  97.39; ppl:  1.07; xent: 0.07; lr: 0.00072; 17898/18370 tok/s;  25331 sec
[2020-06-19 05:39:13,594 INFO] Loading valid dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.valid.0.pt, number of examples: 94688
[2020-06-19 05:42:13,541 INFO] Validation perplexity: 1.06626
[2020-06-19 05:42:13,541 INFO] Validation accuracy: 97.7296
[2020-06-19 05:42:13,545 INFO] Saving checkpoint experiments/checkpoints/USPTO-full_pos_pred/USPTO-full_pos_pred_model_step_30000.pt
[2020-06-19 05:56:10,908 INFO] Step 31000/250000; acc:  97.86; ppl:  1.06; xent: 0.06; lr: 0.00071; 18412/18809 tok/s;  26348 sec
[2020-06-19 06:10:03,200 INFO] Step 32000/250000; acc:  97.93; ppl:  1.06; xent: 0.06; lr: 0.00070; 18636/19005 tok/s;  27181 sec
[2020-06-19 06:15:03,229 INFO] Loading train dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.train.4.pt, number of examples: 1000000
[2020-06-19 06:24:15,464 INFO] Step 33000/250000; acc:  98.27; ppl:  1.05; xent: 0.05; lr: 0.00069; 19483/19817 tok/s;  28033 sec
[2020-06-19 06:38:06,823 INFO] Step 34000/250000; acc:  97.20; ppl:  1.08; xent: 0.08; lr: 0.00068; 18256/18811 tok/s;  28864 sec
[2020-06-19 06:52:05,825 INFO] Step 35000/250000; acc:  96.85; ppl:  1.09; xent: 0.08; lr: 0.00067; 17980/18559 tok/s;  29703 sec
[2020-06-19 06:54:10,815 INFO] Loading train dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.train.5.pt, number of examples: 1000000
[2020-06-19 07:06:11,596 INFO] Step 36000/250000; acc:  97.01; ppl:  1.08; xent: 0.08; lr: 0.00066; 19371/19969 tok/s;  30549 sec
[2020-06-19 07:19:48,447 INFO] Step 37000/250000; acc:  98.12; ppl:  1.05; xent: 0.05; lr: 0.00065; 19150/19592 tok/s;  31366 sec
[2020-06-19 07:32:34,551 INFO] Loading train dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.train.6.pt, number of examples: 1000000
[2020-06-19 07:33:32,540 INFO] Step 38000/250000; acc:  97.50; ppl:  1.07; xent: 0.06; lr: 0.00064; 19501/20002 tok/s;  32190 sec
[2020-06-19 07:47:01,783 INFO] Step 39000/250000; acc:  97.82; ppl:  1.06; xent: 0.06; lr: 0.00063; 18966/19418 tok/s;  32999 sec
[2020-06-19 08:00:33,737 INFO] Step 40000/250000; acc:  97.94; ppl:  1.06; xent: 0.06; lr: 0.00063; 19334/19805 tok/s;  33811 sec
[2020-06-19 08:00:34,098 INFO] Loading valid dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.valid.0.pt, number of examples: 94688
[2020-06-19 08:03:22,961 INFO] Validation perplexity: 1.06356
[2020-06-19 08:03:22,962 INFO] Validation accuracy: 97.832
[2020-06-19 08:03:22,965 INFO] Saving checkpoint experiments/checkpoints/USPTO-full_pos_pred/USPTO-full_pos_pred_model_step_40000.pt
[2020-06-19 08:13:13,596 INFO] Loading train dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.train.7.pt, number of examples: 574730
[2020-06-19 08:17:02,532 INFO] Step 41000/250000; acc:  96.77; ppl:  1.09; xent: 0.09; lr: 0.00062; 18116/18756 tok/s;  34800 sec
[2020-06-19 08:30:32,377 INFO] Step 42000/250000; acc:  97.32; ppl:  1.07; xent: 0.07; lr: 0.00061; 18516/19117 tok/s;  35610 sec
[2020-06-19 08:35:19,854 INFO] Loading train dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.train.0.pt, number of examples: 1000000
[2020-06-19 08:44:44,330 INFO] Step 43000/250000; acc:  98.10; ppl:  1.05; xent: 0.05; lr: 0.00060; 18919/19313 tok/s;  36462 sec
[2020-06-19 08:58:42,361 INFO] Step 44000/250000; acc:  97.63; ppl:  1.07; xent: 0.07; lr: 0.00060; 18428/18969 tok/s;  37300 sec
[2020-06-19 09:12:28,272 INFO] Step 45000/250000; acc:  97.53; ppl:  1.07; xent: 0.07; lr: 0.00059; 18442/18954 tok/s;  38126 sec
[2020-06-19 09:14:27,416 INFO] Loading train dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.train.1.pt, number of examples: 1000000
[2020-06-19 09:26:16,478 INFO] Step 46000/250000; acc:  97.70; ppl:  1.07; xent: 0.06; lr: 0.00058; 19305/19764 tok/s;  38954 sec
[2020-06-19 09:40:07,394 INFO] Step 47000/250000; acc:  98.04; ppl:  1.06; xent: 0.05; lr: 0.00058; 18434/18810 tok/s;  39785 sec
[2020-06-19 09:53:02,626 INFO] Loading train dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.train.2.pt, number of examples: 1000000
[2020-06-19 09:54:12,603 INFO] Step 48000/250000; acc:  97.50; ppl:  1.07; xent: 0.07; lr: 0.00057; 18893/19376 tok/s;  40630 sec
[2020-06-19 10:07:40,828 INFO] Step 49000/250000; acc:  97.87; ppl:  1.06; xent: 0.06; lr: 0.00056; 19467/19932 tok/s;  41438 sec
[2020-06-19 10:21:37,385 INFO] Step 50000/250000; acc:  96.73; ppl:  1.09; xent: 0.09; lr: 0.00056; 18793/19434 tok/s;  42275 sec
[2020-06-19 10:21:37,751 INFO] Loading valid dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.valid.0.pt, number of examples: 94688
[2020-06-19 10:24:26,797 INFO] Validation perplexity: 1.06138
[2020-06-19 10:24:26,798 INFO] Validation accuracy: 97.9049
[2020-06-19 10:24:26,801 INFO] Saving checkpoint experiments/checkpoints/USPTO-full_pos_pred/USPTO-full_pos_pred_model_step_50000.pt
[2020-06-19 10:34:27,513 INFO] Loading train dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.train.3.pt, number of examples: 1000000
[2020-06-19 10:38:34,291 INFO] Step 51000/250000; acc:  97.99; ppl:  1.06; xent: 0.06; lr: 0.00055; 18391/18837 tok/s;  43292 sec
[2020-06-19 10:52:33,739 INFO] Step 52000/250000; acc:  98.05; ppl:  1.05; xent: 0.05; lr: 0.00055; 18938/19336 tok/s;  44131 sec
[2020-06-19 11:06:15,819 INFO] Step 53000/250000; acc:  97.31; ppl:  1.07; xent: 0.07; lr: 0.00054; 18719/19264 tok/s;  44953 sec
[2020-06-19 11:13:29,467 INFO] Loading train dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.train.4.pt, number of examples: 1000000
[2020-06-19 11:20:23,581 INFO] Step 54000/250000; acc:  98.61; ppl:  1.04; xent: 0.04; lr: 0.00054; 18541/18803 tok/s;  45801 sec
[2020-06-19 11:34:24,281 INFO] Step 55000/250000; acc:  98.22; ppl:  1.05; xent: 0.05; lr: 0.00053; 17604/17916 tok/s;  46642 sec
[2020-06-19 11:48:21,640 INFO] Step 56000/250000; acc:  98.14; ppl:  1.05; xent: 0.05; lr: 0.00053; 18484/18903 tok/s;  47479 sec
[2020-06-19 11:52:42,702 INFO] Loading train dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.train.5.pt, number of examples: 1000000
[2020-06-19 12:02:32,032 INFO] Step 57000/250000; acc:  97.96; ppl:  1.06; xent: 0.06; lr: 0.00052; 18355/18779 tok/s;  48330 sec
[2020-06-19 12:16:06,706 INFO] Step 58000/250000; acc:  97.73; ppl:  1.06; xent: 0.06; lr: 0.00052; 19168/19668 tok/s;  49144 sec
[2020-06-19 12:29:43,708 INFO] Step 59000/250000; acc:  96.81; ppl:  1.09; xent: 0.09; lr: 0.00051; 18093/18762 tok/s;  49961 sec
[2020-06-19 12:31:17,627 INFO] Loading train dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.train.6.pt, number of examples: 1000000
[2020-06-19 12:43:46,030 INFO] Step 60000/250000; acc:  98.12; ppl:  1.06; xent: 0.05; lr: 0.00051; 19137/19574 tok/s;  50804 sec
[2020-06-19 12:43:46,410 INFO] Loading valid dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.valid.0.pt, number of examples: 94688
[2020-06-19 12:46:40,776 INFO] Validation perplexity: 1.05987
[2020-06-19 12:46:40,777 INFO] Validation accuracy: 97.9625
[2020-06-19 12:46:40,780 INFO] Saving checkpoint experiments/checkpoints/USPTO-full_pos_pred/USPTO-full_pos_pred_model_step_60000.pt
[2020-06-19 13:00:14,397 INFO] Step 61000/250000; acc:  98.29; ppl:  1.05; xent: 0.05; lr: 0.00051; 19355/19739 tok/s;  51792 sec
[2020-06-19 13:12:31,655 INFO] Loading train dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.train.7.pt, number of examples: 574730
[2020-06-19 13:14:11,113 INFO] Step 62000/250000; acc:  98.00; ppl:  1.05; xent: 0.05; lr: 0.00050; 18730/19144 tok/s;  52629 sec
[2020-06-19 13:27:58,233 INFO] Step 63000/250000; acc:  97.94; ppl:  1.05; xent: 0.05; lr: 0.00050; 19203/19640 tok/s;  53456 sec
[2020-06-19 13:34:54,390 INFO] Loading train dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.train.0.pt, number of examples: 1000000
[2020-06-19 13:41:52,439 INFO] Step 64000/250000; acc:  98.30; ppl:  1.05; xent: 0.05; lr: 0.00049; 18893/19324 tok/s;  54290 sec
[2020-06-19 13:55:42,239 INFO] Step 65000/250000; acc:  97.91; ppl:  1.06; xent: 0.06; lr: 0.00049; 19042/19532 tok/s;  55120 sec
[2020-06-19 14:09:29,246 INFO] Step 66000/250000; acc:  98.08; ppl:  1.05; xent: 0.05; lr: 0.00049; 18172/18599 tok/s;  55947 sec
[2020-06-19 14:13:41,160 INFO] Loading train dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.train.1.pt, number of examples: 1000000
[2020-06-19 14:23:21,307 INFO] Step 67000/250000; acc:  98.64; ppl:  1.04; xent: 0.04; lr: 0.00048; 19317/19638 tok/s;  56779 sec
[2020-06-19 14:37:14,370 INFO] Step 68000/250000; acc:  98.23; ppl:  1.05; xent: 0.05; lr: 0.00048; 18344/18723 tok/s;  57612 sec
[2020-06-19 14:51:07,636 INFO] Step 69000/250000; acc:  97.76; ppl:  1.06; xent: 0.06; lr: 0.00048; 18732/19217 tok/s;  58445 sec
[2020-06-19 14:52:28,121 INFO] Loading train dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.train.2.pt, number of examples: 1000000
[2020-06-19 15:05:27,157 INFO] Step 70000/250000; acc:  98.09; ppl:  1.06; xent: 0.05; lr: 0.00047; 19051/19470 tok/s;  59305 sec
[2020-06-19 15:05:27,519 INFO] Loading valid dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.valid.0.pt, number of examples: 94688
[2020-06-19 15:08:22,417 INFO] Validation perplexity: 1.05868
[2020-06-19 15:08:22,417 INFO] Validation accuracy: 97.996
[2020-06-19 15:08:22,421 INFO] Saving checkpoint experiments/checkpoints/USPTO-full_pos_pred/USPTO-full_pos_pred_model_step_70000.pt
[2020-06-19 15:22:13,316 INFO] Step 71000/250000; acc:  98.48; ppl:  1.04; xent: 0.04; lr: 0.00047; 18912/19246 tok/s;  60311 sec
[2020-06-19 15:34:32,135 INFO] Loading train dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.train.3.pt, number of examples: 1000000
[2020-06-19 15:36:16,131 INFO] Step 72000/250000; acc:  98.30; ppl:  1.05; xent: 0.05; lr: 0.00047; 19113/19487 tok/s;  61154 sec
[2020-06-19 15:50:00,462 INFO] Step 73000/250000; acc:  98.58; ppl:  1.04; xent: 0.04; lr: 0.00046; 18927/19270 tok/s;  61978 sec
[2020-06-19 16:03:37,505 INFO] Step 74000/250000; acc:  98.87; ppl:  1.03; xent: 0.03; lr: 0.00046; 19053/19331 tok/s;  62795 sec
[2020-06-19 16:12:55,380 INFO] Loading train dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.train.4.pt, number of examples: 1000000
[2020-06-19 16:17:26,943 INFO] Step 75000/250000; acc:  98.18; ppl:  1.05; xent: 0.05; lr: 0.00046; 19306/19743 tok/s;  63624 sec
[2020-06-19 16:31:29,920 INFO] Step 76000/250000; acc:  98.41; ppl:  1.05; xent: 0.04; lr: 0.00045; 18386/18760 tok/s;  64467 sec
[2020-06-19 16:45:36,531 INFO] Step 77000/250000; acc:  98.57; ppl:  1.04; xent: 0.04; lr: 0.00045; 17949/18296 tok/s;  65314 sec
[2020-06-19 16:52:08,026 INFO] Loading train dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.train.5.pt, number of examples: 1000000
[2020-06-19 16:59:29,717 INFO] Step 78000/250000; acc:  98.77; ppl:  1.04; xent: 0.04; lr: 0.00045; 19161/19508 tok/s;  66147 sec
[2020-06-19 17:13:28,740 INFO] Step 79000/250000; acc:  98.53; ppl:  1.04; xent: 0.04; lr: 0.00044; 18459/18796 tok/s;  66986 sec
[2020-06-19 17:27:35,942 INFO] Step 80000/250000; acc:  98.56; ppl:  1.04; xent: 0.04; lr: 0.00044; 18389/18732 tok/s;  67833 sec
[2020-06-19 17:27:36,335 INFO] Loading valid dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.valid.0.pt, number of examples: 94688
[2020-06-19 17:30:43,359 INFO] Validation perplexity: 1.05824
[2020-06-19 17:30:43,360 INFO] Validation accuracy: 98.0287
[2020-06-19 17:30:43,363 INFO] Saving checkpoint experiments/checkpoints/USPTO-full_pos_pred/USPTO-full_pos_pred_model_step_80000.pt
[2020-06-19 17:34:32,536 INFO] Loading train dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.train.6.pt, number of examples: 1000000
[2020-06-19 17:44:42,711 INFO] Step 81000/250000; acc:  97.86; ppl:  1.06; xent: 0.06; lr: 0.00044; 18088/18615 tok/s;  68860 sec
[2020-06-19 17:58:33,363 INFO] Step 82000/250000; acc:  97.86; ppl:  1.06; xent: 0.06; lr: 0.00044; 19227/19678 tok/s;  69691 sec
[2020-06-19 18:13:10,997 INFO] Step 83000/250000; acc:  97.80; ppl:  1.06; xent: 0.06; lr: 0.00043; 15367/15824 tok/s;  70569 sec
[2020-06-19 18:14:06,502 INFO] Loading train dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.train.7.pt, number of examples: 574730
[2020-06-19 18:30:12,797 INFO] Step 84000/250000; acc:  98.39; ppl:  1.05; xent: 0.04; lr: 0.00043; 15822/16115 tok/s;  71590 sec
[2020-06-19 18:41:32,348 INFO] Loading train dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.train.0.pt, number of examples: 1000000
[2020-06-19 18:47:26,979 INFO] Step 85000/250000; acc:  98.80; ppl:  1.03; xent: 0.03; lr: 0.00043; 15366/15616 tok/s;  72625 sec
[2020-06-19 19:04:28,036 INFO] Step 86000/250000; acc:  98.01; ppl:  1.06; xent: 0.05; lr: 0.00043; 15060/15434 tok/s;  73646 sec
[2020-06-19 19:21:37,599 INFO] Step 87000/250000; acc:  97.30; ppl:  1.08; xent: 0.07; lr: 0.00042; 15379/15882 tok/s;  74675 sec
[2020-06-19 19:29:39,995 INFO] Loading train dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.train.1.pt, number of examples: 1000000
[2020-06-19 19:38:56,113 INFO] Step 88000/250000; acc:  97.75; ppl:  1.07; xent: 0.06; lr: 0.00042; 14778/15192 tok/s;  75714 sec
[2020-06-19 19:56:01,158 INFO] Step 89000/250000; acc:  98.01; ppl:  1.06; xent: 0.06; lr: 0.00042; 15898/16277 tok/s;  76739 sec
[2020-06-19 20:13:00,499 INFO] Step 90000/250000; acc:  98.06; ppl:  1.05; xent: 0.05; lr: 0.00042; 15694/16070 tok/s;  77758 sec
[2020-06-19 20:13:00,882 INFO] Loading valid dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.valid.0.pt, number of examples: 94688
[2020-06-19 20:16:14,688 INFO] Validation perplexity: 1.05767
[2020-06-19 20:16:14,689 INFO] Validation accuracy: 98.0615
[2020-06-19 20:16:14,692 INFO] Saving checkpoint experiments/checkpoints/USPTO-full_pos_pred/USPTO-full_pos_pred_model_step_90000.pt
[2020-06-19 20:20:21,604 INFO] Loading train dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.train.2.pt, number of examples: 1000000
[2020-06-19 20:31:51,322 INFO] Step 91000/250000; acc:  98.05; ppl:  1.06; xent: 0.05; lr: 0.00041; 16946/17306 tok/s;  78889 sec
[2020-06-19 20:47:05,792 INFO] Step 92000/250000; acc:  98.56; ppl:  1.04; xent: 0.04; lr: 0.00041; 16750/17022 tok/s;  79803 sec
[2020-06-19 21:02:15,473 INFO] Step 93000/250000; acc:  98.31; ppl:  1.05; xent: 0.05; lr: 0.00041; 17545/17905 tok/s;  80713 sec
[2020-06-19 21:03:04,438 INFO] Loading train dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.train.3.pt, number of examples: 1000000
[2020-06-19 21:16:45,975 INFO] Step 94000/250000; acc:  98.01; ppl:  1.06; xent: 0.05; lr: 0.00041; 18076/18523 tok/s;  81584 sec
[2020-06-19 21:30:24,590 INFO] Step 95000/250000; acc:  98.15; ppl:  1.05; xent: 0.05; lr: 0.00041; 18933/19393 tok/s;  82402 sec
[2020-06-19 21:42:03,520 INFO] Loading train dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.train.4.pt, number of examples: 1000000
[2020-06-19 21:44:21,252 INFO] Step 96000/250000; acc:  98.02; ppl:  1.06; xent: 0.06; lr: 0.00040; 19005/19485 tok/s;  83239 sec
[2020-06-19 21:58:37,121 INFO] Step 97000/250000; acc:  98.06; ppl:  1.05; xent: 0.05; lr: 0.00040; 17894/18299 tok/s;  84095 sec
[2020-06-19 22:12:52,309 INFO] Step 98000/250000; acc:  96.59; ppl:  1.09; xent: 0.09; lr: 0.00040; 17728/18393 tok/s;  84950 sec
[2020-06-19 22:22:03,871 INFO] Loading train dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.train.5.pt, number of examples: 1000000
[2020-06-19 22:27:28,360 INFO] Step 99000/250000; acc:  98.40; ppl:  1.05; xent: 0.05; lr: 0.00040; 18633/18997 tok/s;  85826 sec
[2020-06-19 22:41:28,638 INFO] Step 100000/250000; acc:  98.56; ppl:  1.04; xent: 0.04; lr: 0.00040; 18573/18955 tok/s;  86666 sec
[2020-06-19 22:41:29,017 INFO] Loading valid dataset from data/USPTO-full_pos_pred/USPTO-full_pos_pred.valid.0.pt, number of examples: 94688
[2020-06-19 22:44:39,584 INFO] Validation perplexity: 1.05674
[2020-06-19 22:44:39,584 INFO] Validation accuracy: 98.0708
LaHauzel commented 1 year ago

This is my training script. My only modification was changing rnn_size and word_vec_size to 512; otherwise training raised an error and would not run. Your parameter count is much smaller than mine. Is your number of layers also 13? I guess this is where the difference comes from.

dataset=uspto_full_pos_pred
python  train.py -data data/${dataset}/${dataset} \
                   -save_model experiments/checkpoints/${dataset}/${dataset}_model \
                   -seed 42 -gpu_ranks 2 -save_checkpoint_steps 10000 -keep_checkpoint 20 \
                   -train_steps 250000 -param_init 0  -param_init_glorot -max_generator_batches 32 \
                   -batch_size 4096 -batch_type tokens -normalization tokens -max_grad_norm 0  -accum_count 4 \
                   -optim adam -adam_beta1 0.9 -adam_beta2 0.998 -decay_method noam -warmup_steps 8000  \
                   -learning_rate 2 -label_smoothing 0.0 -report_every 1000 \
                   -layers 13 -rnn_size 512 -word_vec_size 512 -encoder_type transformer -decoder_type transformer \
                   -dropout 0.1 -position_encoding -share_embeddings \
                   -global_attention general -global_attention_function softmax -self_attn_type scaled-dot \
                   -heads 8 -transformer_ff 2048
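
As a rough cross-check of that guess, here is a sketch of the encoder parameter count (assumptions: standard Transformer layer shapes with biases and two LayerNorms per layer, as in the module dump further down, and the vocabulary size of 401 reported later in this thread; exact totals also depend on the OpenNMT-py version):

def encoder_params(layers, d_model, d_ff=2048, vocab=401):
    # Rough per-layer count: 4 attention projections, 2 FFN linears, 2 LayerNorms.
    attn = 4 * (d_model * d_model + d_model)                # Q/K/V/output projections + biases
    ffn = d_model * d_ff + d_ff + d_ff * d_model + d_model  # w_1 and w_2 + biases
    norms = 2 * 2 * d_model                                 # two LayerNorms (weight + bias)
    return layers * (attn + ffn + norms) + vocab * d_model  # + token embeddings

for d_model in (256, 512):
    print(f"d_model={d_model}: ~{encoder_params(13, d_model):,} encoder parameters")

With layers=13 this gives roughly 17.2M encoder parameters at d_model=256 and roughly 41.2M at d_model=512, close to the two reported encoder counts (17,196,544 vs 41,252,864), so the hidden size rather than the depth appears to explain the gap.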
wangxr0526 commented 1 year ago

OpenNMT can implement other types of encoders and decoders; the rnn_size parameter seems to have nothing to do with the Transformer. The model parameters I use are layers=13, but with word_vec_size=256. Here are the details of my model:


nb_gpu 3
opt_d_model 500
[2020-06-19 22:21:05,570 INFO] Loading train dataset from data/USPTO-full_S2R/USPTO-full_S2R.train.0.pt, number of examples: 1000000
[2020-06-19 22:21:05,570 INFO]  * vocabulary size. source = 401; target = 401
[2020-06-19 22:21:05,571 INFO] Building model...
opt_d_model 256
[2020-06-19 22:21:19,207 INFO] NMTModel(
  (encoder): TransformerEncoder(
    (embeddings): Embeddings(
      (make_embedding): Sequential(
        (emb_luts): Elementwise(
          (0): Embedding(401, 256, padding_idx=1)
        )
        (pe): PositionalEncoding(
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
    )
    (transformer): ModuleList(
      (0): TransformerEncoderLayer(
        (self_attn): MultiHeadedAttention(
          (linear_keys): Linear(in_features=256, out_features=256, bias=True)
          (linear_values): Linear(in_features=256, out_features=256, bias=True)
          (linear_query): Linear(in_features=256, out_features=256, bias=True)
          (softmax): Softmax(dim=-1)
          (dropout): Dropout(p=0.1, inplace=False)
          (final_linear): Linear(in_features=256, out_features=256, bias=True)
        )
        (feed_forward): PositionwiseFeedForward(
          (w_1): Linear(in_features=256, out_features=2048, bias=True)
          (w_2): Linear(in_features=2048, out_features=256, bias=True)
          (layer_norm): LayerNorm()
          (dropout_1): Dropout(p=0.1, inplace=False)
          (relu): ReLU()
          (dropout_2): Dropout(p=0.1, inplace=False)
        )
        (layer_norm): LayerNorm()
        (dropout): Dropout(p=0.1, inplace=False)
      )
      (1): TransformerEncoderLayer(
        (self_attn): MultiHeadedAttention(
          (linear_keys): Linear(in_features=256, out_features=256, bias=True)
          (linear_values): Linear(in_features=256, out_features=256, bias=True)
          (linear_query): Linear(in_features=256, out_features=256, bias=True)
          (softmax): Softmax(dim=-1)
          (dropout): Dropout(p=0.1, inplace=False)
          (final_linear): Linear(in_features=256, out_features=256, bias=True)
        )
        (feed_forward): PositionwiseFeedForward(
          (w_1): Linear(in_features=256, out_features=2048, bias=True)
          (w_2): Linear(in_features=2048, out_features=256, bias=True)
          (layer_norm): LayerNorm()
          (dropout_1): Dropout(p=0.1, inplace=False)
          (relu): ReLU()
          (dropout_2): Dropout(p=0.1, inplace=False)
        )
        (layer_norm): LayerNorm()
        (dropout): Dropout(p=0.1, inplace=False)
      )
      (2)-(12): 11 more TransformerEncoderLayer blocks, identical to (1) above [verbatim repeats omitted]
    )
    (layer_norm): LayerNorm()
  )
  (decoder): TransformerDecoder(
    (embeddings): Embeddings(
      (make_embedding): Sequential(
        (emb_luts): Elementwise(
          (0): Embedding(401, 256, padding_idx=1)
        )
        (pe): PositionalEncoding(
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
    )
    (transformer_layers): ModuleList(
      (0): TransformerDecoderLayer(
        (self_attn): MultiHeadedAttention(
          (linear_keys): Linear(in_features=256, out_features=256, bias=True)
          (linear_values): Linear(in_features=256, out_features=256, bias=True)
          (linear_query): Linear(in_features=256, out_features=256, bias=True)
          (softmax): Softmax(dim=-1)
          (dropout): Dropout(p=0.1, inplace=False)
          (final_linear): Linear(in_features=256, out_features=256, bias=True)
        )
        (context_attn): MultiHeadedAttention(
          (linear_keys): Linear(in_features=256, out_features=256, bias=True)
          (linear_values): Linear(in_features=256, out_features=256, bias=True)
          (linear_query): Linear(in_features=256, out_features=256, bias=True)
          (softmax): Softmax(dim=-1)
          (dropout): Dropout(p=0.1, inplace=False)
          (final_linear): Linear(in_features=256, out_features=256, bias=True)
        )
        (feed_forward): PositionwiseFeedForward(
          (w_1): Linear(in_features=256, out_features=2048, bias=True)
          (w_2): Linear(in_features=2048, out_features=256, bias=True)
          (layer_norm): LayerNorm()
          (dropout_1): Dropout(p=0.1, inplace=False)
          (relu): ReLU()
          (dropout_2): Dropout(p=0.1, inplace=False)
        )
        (layer_norm_1): LayerNorm()
        (layer_norm_2): LayerNorm()
        (drop): Dropout(p=0.1, inplace=False)
      )
      (1)-(12): 12 more TransformerDecoderLayer blocks, identical to (0) above [verbatim repeats omitted]
    )
    (layer_norm): LayerNorm()
  )
  (generator): Sequential(
    (0): Linear(in_features=256, out_features=401, bias=True)
    (1): LogSoftmax()
  )
)
[2020-06-19 22:21:19,214 INFO] encoder: 17199104
[2020-06-19 22:21:19,214 INFO] decoder: 20627345
[2020-06-19 22:21:19,214 INFO] * number of parameters: 37826449
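As a sanity check, these totals are consistent with a 256-dimensional model. The minimal sketch below recomputes them from the sizes printed in the dump (d_model=256, d_ff=2048, 13 encoder and 13 decoder layers, vocab=401); the one assumption is that the generator weight is tied to the decoder embedding, so only its bias adds parameters — the logged numbers only match under that assumption.

# Back-of-the-envelope parameter count for the architecture printed above.
# Sizes are read off the module dump; the tied generator weight is an
# assumption inferred from the fact that the totals match only with it.
d_model, d_ff, vocab, n_layers = 256, 2048, 401, 13

lin = lambda i, o: i * o + o                        # Linear with bias
ln = 2 * d_model                                    # LayerNorm weight + bias
attn = 4 * lin(d_model, d_model)                    # Q, K, V, output projections
ffn = lin(d_model, d_ff) + lin(d_ff, d_model) + ln  # FFN incl. its LayerNorm

enc_layer = attn + ffn + ln              # self-attn + FFN + outer LayerNorm
dec_layer = 2 * attn + ffn + 2 * ln      # + context attn and a second LayerNorm
emb = vocab * d_model

encoder = n_layers * enc_layer + emb + ln          # + final LayerNorm
decoder = n_layers * dec_layer + emb + ln + vocab  # + tied generator bias

print(encoder, decoder, encoder + decoder)
# -> 17199104 20627345 37826449, matching the log lines above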
LaHauzel commented 1 year ago

By setting word_vec_size to 256, I got a similar number of parameters: ~17M for the encoder and ~20M for the decoder. I'll see whether this solves the problem!
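For reference, this setting is an OpenNMT-py train.py flag. A minimal invocation might look like the sketch below; only -word_vec_size 256 comes from this thread, and every other flag value (paths, head count, step counts) is an assumption to be replaced by the repository's actual training script — this is not the authors' command.

# Hypothetical sketch, not the RetroPrime authors' command.
python train.py \
    -data data/<preprocessed_prefix> \
    -save_model experiments/checkpoints/<run_name> \
    -encoder_type transformer -decoder_type transformer \
    -word_vec_size 256 -rnn_size 256 -layers 13 \
    -transformer_ff 2048 -heads 8 \
    -train_steps 250000 -save_checkpoint_steps 10000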

LaHauzel commented 1 year ago

After changing word_vec_size to 256, the training accuracy kept increasing for the P2S model. However, the accuracy still dropped rapidly for the S2R model.

Could you provide the full command for training an S2R model on the USPTO-full dataset? I am sure the model architecture is the same as yours, but I am not sure about the other hyperparameters.
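One way to recover those hyperparameters without waiting for a reply: OpenNMT-py checkpoints store the training options alongside the weights. Assuming the released RetroPrime checkpoints follow the standard OpenNMT-py format (where the argparse Namespace is saved under the 'opt' key — an assumption, not verified against this repo), a short script can print them:

# Inspect the training options stored inside an OpenNMT-py checkpoint.
# Path and key layout are assumptions based on standard OpenNMT-py saves.
import torch

ckpt = torch.load("path/to/model_step_10000.pt", map_location="cpu")
opt = ckpt["opt"]  # argparse.Namespace with the original training flags
for key in ("word_vec_size", "rnn_size", "layers", "transformer_ff",
            "heads", "learning_rate", "warmup_steps", "label_smoothing"):
    print(key, getattr(opt, key, "<not set>"))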