awslabs / sockeye

Sequence-to-sequence framework with a focus on Neural Machine Translation based on PyTorch
https://awslabs.github.io/sockeye/
Apache License 2.0

ppl improved too much after evaluation on valid set #694

Closed zhangwb16 closed 5 years ago

zhangwb16 commented 5 years ago

Hi. Whenever an evaluation is performed on the validation set, the perplexity reported on the training set improves (drops) sharply immediately afterwards.

```
[INFO:sockeye.training] Epoch[0] Batch [2900] Speed: 186.94 samples/sec 8556.40 tokens/sec 1.59 updates/sec perplexity=38.950971
[INFO:sockeye.training] Epoch[0] Batch [2950] Speed: 189.95 samples/sec 8499.62 tokens/sec 1.59 updates/sec perplexity=38.614771
[INFO:sockeye.training] Epoch[0] Batch [3000] Speed: 154.75 samples/sec 8548.68 tokens/sec 1.60 updates/sec perplexity=38.275190
[INFO:root] Saved params to "/home/paper/experiment/uych_merge_num/model/params.00003"
[INFO:sockeye.training] Checkpoint [3] Updates=3000 Epoch=0 Samples=315607 Time-cost=698.434 Updates/sec=1.432
[INFO:sockeye.training] Checkpoint [3] Train-perplexity=38.275190
[INFO:sockeye.training] Checkpoint [3] Validation-perplexity=19.245108
[INFO:sockeye.utils] GPU 1: 8598/11171 MB (76.97%)
[INFO:sockeye.training] Decoder-2 finished: {'bleu-val': 0.05292739742353391, 'chrf-val': 0.17296305754473082, 'rouge1-val': 0.24738774955513881, 'rouge2-val': 0.08921015787118985, 'rougel-val': 0.1775845654294898, 'avg-sec-per-sent-val': 0.39757008722850257, 'decode-walltime-val': 278.2990610599518}
[INFO:sockeye.training] Starting process: Decoder-3
[INFO:sockeye.training] Validation-perplexity improved to 19.245108 (delta=10.920232).
[INFO:sockeye.inference] Loading 1 model(s) from ['model'] ...
[INFO:sockeye.vocab] Vocabulary (11295 words) loaded from "model/vocab.src.0.json"
[INFO:sockeye.vocab] Vocabulary (11295 words) loaded from "model/vocab.trg.0.json"
[INFO:sockeye.inference] Model version: 1.18.67
[INFO:sockeye.model] ModelConfig loaded from "model/config"
[INFO:sockeye.model] Config[_frozen=True, config_data=Config[_frozen=True, data_statistics=Config[_frozen=True, average_len_target_per_bucket=[5.520092378752904, 11.426037474687869, 16.362553665129756, 22.42042469915727, 28.65909090909081, 34.65383204090136, 40.336549905110886, 44.7546207775656, 49.76857585139321, 54.553170028818265, 59.01535363705011, 64.08560794044665, 69.42162162162153, 72.44396920444831, 78.45221445221455, 80.7770700636944, 84.32739420935404, 88.5361445783133, 96.37974683544299, 102.23428571428572, 143.85000000000002, 150.76923076923077, 157.66666666666663, 163.0, 169.5, 178.0, 186.0, 194.0, 200.0], buckets=[(10, 7), (20, 14), (30, 21), (40, 28), (50, 35), (60, 42), (70, 49), (80, 56), (90, 63), (100, 70), (110, 77), (120, 84), (130, 91), (140, 98), (150, 105), (160, 112), (170, 119), (180, 126), (190, 133), (200, 140), (200, 147), (200, 154), (200, 161), (200, 168), (200, 175), (200, 182), (200, 189), (200, 196), (200, 200)], length_ratio_mean=0.6974226281947343, length_ratio_std=0.25354017448310184, max_observed_len_source=200, max_observed_len_target=200, num_discarded=651, num_sents=335745, num_sents_per_bucket=[4330, 39013, 62890, 58253, 49940, 40879, 31089, 18828, 11628, 6940, 3973, 2418, 1665, 1169, 858, 628, 449, 332, 237, 175, 20, 13, 9, 2, 2, 2, 1, 1, 1], num_tokens_source=14661026, num_tokens_target=9530172, num_unks_source=0, num_unks_target=0, size_vocab_source=11295, size_vocab_target=11295], max_seq_len_source=200, max_seq_len_target=200, num_source_factors=1, source_with_eos=True], config_decoder=Config[_frozen=True, act_type=relu, attention_heads=8, conv_config=None, dropout_act=0.1, dropout_attention=0.1, dropout_prepost=0.1, dtype=float32, feed_forward_num_hidden=2048, lhuc=False, max_seq_len_source=200, max_seq_len_target=200, model_size=512, num_layers=6, positional_embedding_type=fixed, postprocess_sequence=dr, preprocess_sequence=n, use_lhuc=False], config_embed_source=Config[_frozen=True, dropout=0.0, dtype=float32, factor_configs=None, num_embed=512, num_factors=1, vocab_size=11295], config_embed_target=Config[_frozen=True, dropout=0.0, dtype=float32, factor_configs=None, num_embed=512, num_factors=1, vocab_size=11295], config_encoder=Config[_frozen=True, act_type=relu, attention_heads=8, conv_config=None, dropout_act=0.1, dropout_attention=0.1, dropout_prepost=0.1, dtype=float32, feed_forward_num_hidden=2048, lhuc=False, max_seq_len_source=200, max_seq_len_target=200, model_size=512, num_layers=6, positional_embedding_type=fixed, postprocess_sequence=dr, preprocess_sequence=n, use_lhuc=False], config_loss=Config[_frozen=True, label_smoothing=0.1, name=cross-entropy, normalization_type=valid, vocab_size=11295], lhuc=False, vocab_source_size=11295, vocab_target_size=11295, weight_normalization=False, weight_tying=False, weight_tying_type=None]
[INFO:sockeye.encoder] sockeye.encoder.EncoderSequence dtype: float32
[INFO:sockeye.encoder] sockeye.encoder.AddSinCosPositionalEmbeddings dtype: float32
[INFO:sockeye.encoder] sockeye.encoder.TransformerEncoder dtype: float32
[INFO:sockeye.decoder] sockeye.decoder.TransformerDecoder dtype: float32
[INFO:sockeye.encoder] sockeye.encoder.AddSinCosPositionalEmbeddings dtype: float32
[INFO:sockeye.encoder] sockeye.encoder.Embedding dtype: float32
[INFO:sockeye.encoder] sockeye.encoder.Embedding dtype: float32
[INFO:sockeye.model] Loaded params from "model/params.00003"
[INFO:sockeye.inference] 1 model(s) loaded in 1.5984s
[INFO:sockeye.inference] Translator (1 model(s) beam_size=5 beam_prune=off beam_search_stop=all nbest_size=1 ensemble_mode=None batch_size=16 buckets_source=[10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150, 160, 170, 180, 190, 200] avoiding=0)
[INFO:sockeye.training] Epoch[0] Batch [3050] Speed: 112.57 samples/sec 5367.93 tokens/sec 1.01 updates/sec perplexity=31.828867
[INFO:sockeye.training] Epoch[0] Batch [3100] Speed: 113.93 samples/sec 6081.44 tokens/sec 1.13 updates/sec perplexity=31.630315
[INFO:sockeye.training] Epoch[0] Batch [3150] Speed: 142.17 samples/sec 6323.86 tokens/sec 1.19 updates/sec perplexity=30.941710
[INFO:sockeye.training] Epoch[1] Batch [3200] Speed: 118.41 samples/sec 6318.54 tokens/sec 1.18 updates/sec perplexity=30.576838
[INFO:sockeye.training] Epoch[1] Batch [3250] Speed: 133.68 samples/sec 6476.76 tokens/sec 1.22 updates/sec perplexity=29.923091
[INFO:sockeye.training] Epoch[1] Batch [3300] Speed: 142.30 samples/sec 7796.68 tokens/sec 1.45 updates/sec perplexity=29.473221
[INFO:sockeye.training] Epoch[1] Batch [3350] Speed: 183.60 samples/sec 8649.35 tokens/sec 1.61 updates/sec perplexity=29.083229
[INFO:sockeye.training] Epoch[1] Batch [3400] Speed: 166.27 samples/sec 8560.99 tokens/sec 1.60 updates/sec perplexity=28.765739
[INFO:sockeye.training] Epoch[1] Batch [3450] Speed: 167.16 samples/sec 8566.56 tokens/sec 1.61 updates/sec perplexity=28.536419
[INFO:sockeye.training] Epoch[1] Batch [3500] Speed: 166.70 samples/sec 8586.99 tokens/sec 1.62 updates/sec perplexity=28.242006
[INFO:sockeye.training] Epoch[1] Batch [3550] Speed: 147.64 samples/sec 8500.26 tokens/sec 1.58 updates/sec perplexity=28.028866
[INFO:sockeye.training] Epoch[1] Batch [3600] Speed: 178.50 samples/sec 8504.93 tokens/sec 1.58 updates/sec perplexity=27.796689
[INFO:sockeye.training] Epoch[1] Batch [3650] Speed: 181.70 samples/sec 8538.39 tokens/sec 1.59 updates/sec perplexity=27.664975
[INFO:sockeye.training] Epoch[1] Batch [3700] Speed: 166.81 samples/sec 8578.83 tokens/sec 1.60 updates/sec perplexity=27.461362
[INFO:sockeye.training] Epoch[1] Batch [3750] Speed: 170.80 samples/sec 8566.02 tokens/sec 1.60 updates/sec perplexity=27.269973
[INFO:sockeye.training] Epoch[1] Batch [3800] Speed: 159.32 samples/sec 8548.78 tokens/sec 1.61 updates/sec perplexity=27.027031
[INFO:sockeye.training] Epoch[1] Batch [3850] Speed: 152.18 samples/sec 8512.43 tokens/sec 1.60 updates/sec perplexity=26.833334
[INFO:sockeye.training] Epoch[1] Batch [3900] Speed: 154.03 samples/sec 8446.31 tokens/sec 1.57 updates/sec perplexity=26.633254
[INFO:sockeye.training] Epoch[1] Batch [3950] Speed: 149.17 samples/sec 8410.62 tokens/sec 1.57 updates/sec perplexity=26.450253
[INFO:sockeye.training] Epoch[1] Batch [4000] Speed: 166.03 samples/sec 8524.19 tokens/sec 1.60 updates/sec perplexity=26.299966
[INFO:root] Saved params to "/home/paper/experiment/uych_merge_num/model/params.00004"
[INFO:sockeye.training] Checkpoint [4] Updates=4000 Epoch=1 Samples=420011 Time-cost=686.836 Updates/sec=1.456
[INFO:sockeye.training] Checkpoint [4] Train-perplexity=26.299966
[INFO:sockeye.training] Checkpoint [4] Validation-perplexity=14.183798
[INFO:sockeye.utils] GPU 1: 8598/11171 MB (76.97%)
[INFO:sockeye.training] Decoder-3 finished: {'bleu-val': 0.11341598166195394, 'chrf-val': 0.23644202627442432, 'rouge1-val': 0.35400646849530715, 'rouge2-val': 0.16386822063004236, 'rougel-val': 0.2779966421122925, 'avg-sec-per-sent-val': 0.32222388778414046, 'decode-walltime-val': 225.55672144889832}
[INFO:sockeye.training] Starting process: Decoder-4
[INFO:sockeye.training] Validation-perplexity improved to 14.183798 (delta=5.061309).
[INFO:sockeye.inference] Loading 1 model(s) from ['model'] ...
[... vocabulary, model version, ModelConfig, and dtype output identical to the first load above ...]
[INFO:sockeye.model] Loaded params from "model/params.00004"
[INFO:sockeye.inference] 1 model(s) loaded in 1.6600s
[... Translator settings identical to above ...]
[INFO:sockeye.training] Epoch[1] Batch [4050] Speed: 101.47 samples/sec 5408.09 tokens/sec 1.01 updates/sec perplexity=22.481330
[INFO:sockeye.training] Epoch[1] Batch [4100] Speed: 127.75 samples/sec 6254.34 tokens/sec 1.17 updates/sec perplexity=22.391598
[INFO:sockeye.training] Epoch[1] Batch [4150] Speed: 129.80 samples/sec 6390.24 tokens/sec 1.20 updates/sec perplexity=22.322329
[INFO:sockeye.training] Epoch[1] Batch [4200] Speed: 125.00 samples/sec 6410.08 tokens/sec 1.20 updates/sec perplexity=22.211722
[INFO:sockeye.training] Epoch[1] Batch [4250] Speed: 159.01 samples/sec 7992.97 tokens/sec 1.50 updates/sec perplexity=22.180405
[INFO:sockeye.training] Epoch[1] Batch [4300] Speed: 170.19 samples/sec 8603.74 tokens/sec 1.61 updates/sec perplexity=22.110387
[INFO:sockeye.training] Epoch[1] Batch [4350] Speed: 182.02 samples/sec 8649.14 tokens/sec 1.62 updates/sec perplexity=21.983482
[INFO:sockeye.training] Epoch[1] Batch [4400] Speed: 165.26 samples/sec 8424.14 tokens/sec 1.56 updates/sec perplexity=21.828569
[INFO:sockeye.training] Epoch[1] Batch [4450] Speed: 172.81 samples/sec 8626.56 tokens/sec 1.61 updates/sec perplexity=21.729114
[INFO:sockeye.training] Epoch[1] Batch [4500] Speed: 187.41 samples/sec 8588.88 tokens/sec 1.61 updates/sec perplexity=21.625711
[INFO:sockeye.training] Epoch[1] Batch [4550] Speed: 169.88 samples/sec 8548.77 tokens/sec 1.61 updates/sec perplexity=21.553508
[INFO:sockeye.training] Epoch[1] Batch [4600] Speed: 150.42 samples/sec 8549.27 tokens/sec 1.60 updates/sec perplexity=21.439781
[INFO:sockeye.training] Epoch[1] Batch [4650] Speed: 198.12 samples/sec 8651.08 tokens/sec 1.62 updates/sec perplexity=21.360640
[INFO:sockeye.training] Epoch[1] Batch [4700] Speed: 174.84 samples/sec 8633.33 tokens/sec 1.61 updates/sec perplexity=21.230967
[INFO:sockeye.training] Epoch[1] Batch [4750] Speed: 143.73 samples/sec 8482.19 tokens/sec 1.58 updates/sec perplexity=21.098202
[INFO:sockeye.training] Epoch[1] Batch [4800] Speed: 153.39 samples/sec 8440.01 tokens/sec 1.58 updates/sec perplexity=20.978014
[INFO:sockeye.training] Epoch[1] Batch [4850] Speed: 180.45 samples/sec 8641.11 tokens/sec 1.62 updates/sec perplexity=20.862696
[INFO:sockeye.training] Epoch[1] Batch [4900] Speed: 158.91 samples/sec 8530.49 tokens/sec 1.59 updates/sec perplexity=20.771210
[INFO:sockeye.training] Epoch[1] Batch [4950] Speed: 179.04 samples/sec 8596.83 tokens/sec 1.60 updates/sec perplexity=20.649071
[INFO:sockeye.training] Epoch[1] Batch [5000] Speed: 159.54 samples/sec 8451.92 tokens/sec 1.58 updates/sec perplexity=20.554147
[INFO:root] Saved params to "/home/paper/experiment/uych_merge_num/model/params.00005"
[INFO:sockeye.training] Checkpoint [5] Updates=5000 Epoch=1 Samples=525888 Time-cost=671.687 Updates/sec=1.489
[INFO:sockeye.training] Checkpoint [5] Train-perplexity=20.554147
[INFO:sockeye.training] Checkpoint [5] Validation-perplexity=11.020620
[INFO:sockeye.utils] GPU 1: 8598/11171 MB (76.97%)
[INFO:sockeye.training] Decoder-4 finished: {'bleu-val': 0.16741595894554867, 'chrf-val': 0.2901495806930786, 'rouge1-val': 0.4310877843275138, 'rouge2-val': 0.22895202040718612, 'rougel-val': 0.34434505173980173, 'avg-sec-per-sent-val': 0.25609504325049265, 'decode-walltime-val': 179.26653027534485}
[INFO:sockeye.training] Starting process: Decoder-5
[INFO:sockeye.training] Validation-perplexity improved to 11.020620 (delta=3.163178).
[INFO:sockeye.inference] Loading 1 model(s) from ['model'] ...
[... vocabulary, model version, ModelConfig, and dtype output identical to the first load above ...]
[INFO:sockeye.model] Loaded params from "model/params.00005"
[INFO:sockeye.inference] 1 model(s) loaded in 1.6807s
[... Translator settings identical to above ...]
[INFO:sockeye.training] Epoch[1] Batch [5050] Speed: 101.52 samples/sec 5245.87 tokens/sec 0.98 updates/sec perplexity=18.868604
[INFO:sockeye.training] Epoch[1] Batch [5100] Speed: 133.69 samples/sec 6272.03 tokens/sec 1.18 updates/sec perplexity=18.640857
[INFO:sockeye.training] Epoch[1] Batch [5150] Speed: 129.17 samples/sec 6290.80 tokens/sec 1.17 updates/sec perplexity=18.600625
[INFO:sockeye.training] Epoch[1] Batch [5200] Speed: 127.78 samples/sec 6465.12 tokens/sec 1.21 updates/sec perplexity=18.474325
[INFO:sockeye.training] Epoch[1] Batch [5250] Speed: 169.96 samples/sec 7570.23 tokens/sec 1.42 updates/sec perplexity=18.374515
[INFO:sockeye.training] Epoch[1] Batch [5300] Speed: 184.67 samples/sec 8621.13 tokens/sec 1.62 updates/sec perplexity=18.146411
[INFO:sockeye.training] Epoch[1] Batch [5350] Speed: 161.82 samples/sec 8399.51 tokens/sec 1.57 updates/sec perplexity=18.197097
[INFO:sockeye.training] Epoch[1] Batch [5400] Speed: 225.47 samples/sec 8734.48 tokens/sec 1.64 updates/sec perplexity=18.161662
[INFO:sockeye.training] Epoch[1] Batch [5450] Speed: 164.84 samples/sec 8442.66 tokens/sec 1.60 updates/sec perplexity=18.200621
[INFO:sockeye.training] Epoch[1] Batch [5500] Speed: 177.87 samples/sec 8614.99 tokens/sec 1.63 updates/sec perplexity=18.199061
[INFO:sockeye.training] Epoch[1] Batch [5550] Speed: 164.67 samples/sec 8565.56 tokens/sec 1.61 updates/sec perplexity=18.172386
[INFO:sockeye.training] Epoch[1] Batch [5600] Speed: 153.85 samples/sec 8481.75 tokens/sec 1.59 updates/sec perplexity=18.100915
[INFO:sockeye.training] Epoch[1] Batch [5650] Speed: 152.06 samples/sec 8492.39 tokens/sec 1.59 updates/sec perplexity=18.041008
[INFO:sockeye.training] Epoch[1] Batch [5700] Speed: 163.33 samples/sec 8484.80 tokens/sec 1.61 updates/sec perplexity=17.967594
[INFO:sockeye.training] Epoch[1] Batch [5750] Speed: 171.00 samples/sec 8486.19 tokens/sec 1.60 updates/sec perplexity=17.929403
[INFO:sockeye.training] Epoch[1] Batch [5800] Speed: 185.09 samples/sec 8671.95 tokens/sec 1.62 updates/sec perplexity=17.815732
[INFO:sockeye.training] Epoch[1] Batch [5850] Speed: 163.38 samples/sec 8571.77 tokens/sec 1.60 updates/sec perplexity=17.719172
[INFO:sockeye.training] Epoch[1] Batch [5900] Speed: 154.12 samples/sec 8494.70 tokens/sec 1.58 updates/sec perplexity=17.658369
[INFO:sockeye.training] Epoch[1] Batch [5950] Speed: 157.95 samples/sec 8542.22 tokens/sec 1.60 updates/sec perplexity=17.526942
[INFO:sockeye.training] Epoch[1] Batch [6000] Speed: 171.26 samples/sec 8627.36 tokens/sec 1.61 updates/sec perplexity=17.432637
[INFO:root] Saved params to "/home/paper/experiment/uych_merge_num/model/params.00006"
[INFO:sockeye.training] Checkpoint [6] Updates=6000 Epoch=1 Samples=633045 Time-cost=674.813 Updates/sec=1.482
[INFO:sockeye.training] Checkpoint [6] Train-perplexity=17.432637
[INFO:sockeye.training] Checkpoint [6] Validation-perplexity=8.725789
[INFO:sockeye.utils] GPU 1: 8599/11171 MB (76.98%)
[INFO:sockeye.training] Decoder-5 finished: {'bleu-val': 0.20824514447505457, 'chrf-val': 0.3274951083938135, 'rouge1-val': 0.48644458565135634, 'rouge2-val': 0.28553426806547455, 'rougel-val': 0.40379504584472015, 'avg-sec-per-sent-val': 0.2709647229739598, 'decode-walltime-val': 189.67530608177185}
[INFO:sockeye.training] Starting process: Decoder-6
[INFO:sockeye.training] Validation-perplexity improved to 8.725789 (delta=2.294831).
[INFO:sockeye.inference] Loading 1 model(s) from ['model'] ...
[... vocabulary, model version, ModelConfig, and dtype output identical to the first load above ...]
[INFO:sockeye.model] Loaded params from "model/params.00006"
[INFO:sockeye.inference] 1 model(s) loaded in 1.6360s
[... Translator settings identical to above ...]
[INFO:sockeye.training] Epoch[1] Batch [6050] Speed: 98.00 samples/sec 5275.91 tokens/sec 0.98 updates/sec perplexity=15.771333
[INFO:sockeye.training] Epoch[1] Batch [6100] Speed: 133.49 samples/sec 6195.18 tokens/sec 1.15 updates/sec perplexity=15.837324
[INFO:sockeye.training] Epoch[1] Batch [6150] Speed: 122.79 samples/sec 6308.76 tokens/sec 1.19 updates/sec perplexity=15.737472
```

The change is visible in the log above: right after each checkpoint, the training perplexity drops sharply (for example, from 38.275190 at batch 3000 to 31.828867 at batch 3050, immediately after checkpoint 3). I wonder why this happens.

fhieber commented 5 years ago

Hi, thank you for your question. Please see #620 for an explanation.
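
The pattern in the log is what a running metric with a per-checkpoint reset produces: the reported training perplexity averages everything since the last checkpoint, and the first value logged after a reset covers only batches computed with newer, better parameters. Below is a minimal sketch of that behaviour (the `RunningPerplexity` class and the simulated numbers are hypothetical, not Sockeye's actual implementation):

```python
import math

class RunningPerplexity:
    """Running perplexity over all batches seen since the last reset.

    Illustration only; Sockeye's metric classes differ in detail, but the
    reset-after-checkpoint behaviour is the key idea.
    """
    def __init__(self):
        self.sum_ce = 0.0     # accumulated token cross-entropy (nats)
        self.num_tokens = 0

    def update(self, ce_sum: float, num_tokens: int) -> None:
        self.sum_ce += ce_sum
        self.num_tokens += num_tokens

    def value(self) -> float:
        return math.exp(self.sum_ce / max(self.num_tokens, 1))

    def reset(self) -> None:
        self.sum_ce = 0.0
        self.num_tokens = 0

# Simulate a model whose true per-token cross-entropy improves smoothly.
metric = RunningPerplexity()
checkpoint_interval = 1000
for update in range(1, 3001):
    true_ce = 4.0 - 0.0005 * update       # steady, gradual improvement
    metric.update(true_ce * 100, 100)     # pretend each update sees 100 tokens
    if update % 500 == 0:
        print(f"update {update:4d}  running perplexity {metric.value():6.2f}")
    if update % checkpoint_interval == 0:
        metric.reset()   # reset at the checkpoint: the next printed value
                         # averages only post-checkpoint batches, so it
                         # appears to drop sharply, as in the log above
```

Even though the simulated model improves at a constant rate, the printed running perplexity jumps down after every reset (roughly 42.5 at update 1000, then 29.2 at update 1500), which is exactly the shape of the curve in the log.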

zhangwb16 commented 5 years ago

Thank you for your explanation.
I mistakenly thought that this ppl was the average over only the last 50 batches; it is actually the running average since the last checkpoint, which is reset after each evaluation.
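
To make that difference concrete, a toy calculation with invented numbers (purely illustrative, not taken from the run above): when the model is still improving quickly, the average since the last checkpoint and the average over only the most recent batches give quite different values.

```python
# Invented per-batch perplexities for one 1000-update checkpoint interval
# plus the first 50 updates after it; the model improves steadily.
# (Real perplexity is exp(mean cross-entropy); plain averaging is used
# here only to keep the toy example simple.)
ppls = [38.0 - 0.007 * i for i in range(1050)]

# Logged just BEFORE the checkpoint: average since the last reset.
before = sum(ppls[:1000]) / 1000

# Logged just AFTER the checkpoint: average over only the 50 batches
# seen since the reset, all computed with newer parameters.
after = sum(ppls[1000:1050]) / 50

print(f"last value before checkpoint: {before:.2f}")   # ~34.50
print(f"first value after checkpoint: {after:.2f}")    # ~30.83
```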