facebookresearch / fairseq

Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
MIT License

Transformer multilingual model not training #503

Closed shlokk closed 5 years ago

shlokk commented 5 years ago

I'm trying to train the fairseq multilingual Transformer model. When I set the lang-pairs to en-de,en-de the model starts training, but when I set them to en-de,sr-de it gets stuck after reporting that no checkpoint was found. I'm able to train models with even more parameters than the en-de,sr-de model, so this looks like some other issue that I haven't been able to track down. I'm attaching the logs for both runs; can you please take a look?

Logs for model en-de,sr-de

fairseq-train processed_multilingual_less_vocab -a multilingual_transformer_iwslt_de_en --optimizer adam --lr 0.0005 --label-smoothing 0.1 --dropout 0.3 --max-tokens 1100 --min-lr '1e-09' --lr-scheduler inverse_sqrt --weight-decay 0.0001 --criterion label_smoothed_cross_entropy --max-update 50000 --warmup-updates 4000 --warmup-init-lr '1e-07' --adam-betas '(0.9, 0.98)' --save-dir checkpoints/transformer_multilingual --task multilingual_translation --lang-pairs en-de,sr-de --max-sentences 8 --batch-size 8
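For context, the `multilingual_translation` task builds one encoder/decoder pair per entry in `--lang-pairs`, keyed by the `src-tgt` string (these keys show up as the `ModuleDict` entries in the model dump below). A simplified sketch of that pair parsing — illustrative only, not fairseq's actual implementation:

```python
# Simplified sketch of how a comma-separated --lang-pairs value turns into
# per-pair model keys and a set of language dictionaries to load.
# This mirrors the ModuleDict keys seen in the log; it is NOT fairseq's code.

def parse_lang_pairs(lang_pairs: str):
    pairs = lang_pairs.split(",")                       # e.g. ["en-de", "sr-de"]
    parsed = [tuple(p.split("-")) for p in pairs]       # [("en","de"), ("sr","de")]
    langs = sorted({lang for pair in parsed for lang in pair})
    return pairs, langs

pairs, langs = parse_lang_pairs("en-de,sr-de")
print(pairs)   # ['en-de', 'sr-de'] -> two separate models in the ModuleDict
print(langs)   # ['de', 'en', 'sr'] -> three dictionaries are loaded
```

With `en-de,en-de` both entries map to the same key, which is why that run behaves like a single-pair model.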

Namespace(adam_betas='(0.9, 0.98)', adam_eps=1e-08, adaptive_softmax_cutoff=None, adaptive_softmax_dropout=0, arch='multilingual_transformer_iwslt_de_en', attention_dropout=0.0, bucket_cap_mb=25, clip_norm=25, cpu=False, criterion='label_smoothed_cross_entropy', data='processed_multilingual_less_vocab', ddp_backend='c10d', decoder_attention_heads=4, decoder_embed_dim=512, decoder_embed_path=None, decoder_ffn_embed_dim=1024, decoder_input_dim=512, decoder_layers=6, decoder_learned_pos=False, decoder_normalize_before=False, decoder_output_dim=512, device_id=0, distributed_backend='nccl', distributed_init_method='tcp://localhost:11809', distributed_port=-1, distributed_rank=0, distributed_world_size=4, dropout=0.3, encoder_attention_heads=4, encoder_embed_dim=512, encoder_embed_path=None, encoder_ffn_embed_dim=1024, encoder_layers=6, encoder_learned_pos=False, encoder_normalize_before=False, fix_batches_to_gpus=False, fp16=False, fp16_init_scale=128, fp16_scale_tolerance=0.0, fp16_scale_window=None, keep_interval_updates=-1, keep_last_epochs=-1, label_smoothing=0.1, lang_pairs='en-de,sr-de', lazy_load=False, left_pad_source='True', left_pad_target='False', log_format=None, log_interval=1000, lr=[0.0005], lr_scheduler='inverse_sqrt', lr_shrink=0.1, max_epoch=0, max_sentences=8, max_sentences_valid=8, max_source_positions=1024, max_target_positions=1024, max_tokens=1100, max_update=50000, memory_efficient_fp16=False, min_loss_scale=0.0001, min_lr=1e-09, momentum=0.99, no_epoch_checkpoints=False, no_progress_bar=False, no_save=False, no_token_positional_embeddings=False, num_workers=0, optimizer='adam', optimizer_overrides='{}', raw_text=False, relu_dropout=0.0, reset_lr_scheduler=False, reset_optimizer=False, restore_file='checkpoint_last.pt', save_dir='checkpoints/transformer_multilingual', save_interval=1, save_interval_updates=0, seed=1, sentence_avg=False, share_all_embeddings=False, share_decoder_embeddings=False, share_decoder_input_output_embed=False, share_decoders=False, share_encoder_embeddings=False, share_encoders=False, skip_invalid_size_inputs_valid_test=False, source_lang=None, target_lang=None, task='multilingual_translation', train_subset='train', update_freq=[1], user_dir=None, valid_subset='valid', validate_interval=1, warmup_init_lr=1e-07, warmup_updates=4000, weight_decay=0.0001)

(Ranks 1–3 print identical Namespaces, differing only in device_id and distributed_rank. Dictionary and dataset lines below are printed once per rank; repeats omitted.)

| [en] dictionary: 10000 types
| [de] dictionary: 10000 types
| [sr] dictionary: 10000 types
| processed_multilingual_less_vocab train 188660 examples
| processed_multilingual_less_vocab valid 20630 examples
| distributed init (rank 0): tcp://localhost:11809
| distributed init (rank 1): tcp://localhost:11809
| distributed init (rank 2): tcp://localhost:11809
| distributed init (rank 3): tcp://localhost:11809
| initialized host vulcan03.umiacs.umd.edu as rank 0

MultilingualTransformerModel(
  (models): ModuleDict(
    (en-de): FairseqModel(
      (encoder): TransformerEncoder(
        (embed_tokens): Embedding(10000, 512, padding_idx=1)
        (embed_positions): SinusoidalPositionalEmbedding()
        (layers): ModuleList(
          (0-5): 6 x TransformerEncoderLayer(
            (self_attn): MultiheadAttention(
              (out_proj): Linear(in_features=512, out_features=512, bias=True)
            )
            (fc1): Linear(in_features=512, out_features=1024, bias=True)
            (fc2): Linear(in_features=1024, out_features=512, bias=True)
            (layer_norms): ModuleList(
              (0-1): 2 x LayerNorm(torch.Size([512]), eps=1e-05, elementwise_affine=True)
            )
          )
        )
      )
      (decoder): TransformerDecoder(
        (embed_tokens): Embedding(10000, 512, padding_idx=1)
        (embed_positions): SinusoidalPositionalEmbedding()
        (layers): ModuleList(
          (0-5): 6 x TransformerDecoderLayer(
            (self_attn): MultiheadAttention(
              (out_proj): Linear(in_features=512, out_features=512, bias=True)
            )
            (self_attn_layer_norm): LayerNorm(torch.Size([512]), eps=1e-05, elementwise_affine=True)
            (encoder_attn): MultiheadAttention(
              (out_proj): Linear(in_features=512, out_features=512, bias=True)
            )
            (encoder_attn_layer_norm): LayerNorm(torch.Size([512]), eps=1e-05, elementwise_affine=True)
            (fc1): Linear(in_features=512, out_features=1024, bias=True)
            (fc2): Linear(in_features=1024, out_features=512, bias=True)
            (final_layer_norm): LayerNorm(torch.Size([512]), eps=1e-05, elementwise_affine=True)
          )
        )
      )
    )
    (sr-de): FairseqModel(
      (encoder): TransformerEncoder( ...identical architecture to the en-de encoder... )
      (decoder): TransformerDecoder( ...identical architecture to the en-de decoder... )
    )
  )
)

| model multilingual_transformer_iwslt_de_en, criterion LabelSmoothedCrossEntropyCriterion
| num. model params: 64640000 (num. trained: 64640000)
| training on 4 GPUs
| max tokens per GPU = 1100 and max sentences per GPU = 8
| WARNING: 141293 samples have invalid sizes and will be skipped, max_positions={'en-de': (1024, 1024), 'sr-de': (1024, 1024)}, first few sample ids=[47367, 47368, 47369, 47370, 47371, 47372, 47373, 47374, 47375, 47376]
| no existing checkpoint found checkpoints/transformer_multilingual/checkpoint_last.pt
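The WARNING above may be relevant: 141293 of the 188660 training examples are reported as having invalid sizes and will be skipped. A hedged sketch of that size-based filtering — illustrative only, with made-up numbers, not fairseq's actual code:

```python
# Illustrative sketch of size-based example filtering, as described by the
# WARNING in the log: examples whose source or target length exceeds
# max_positions are dropped before batching. Names and the sample lengths
# below are assumptions for demonstration; this is NOT fairseq's code.

def filter_by_size(sizes, max_positions=1024):
    """Return indices of examples whose (src_len, tgt_len) fit within max_positions."""
    return [i for i, (src, tgt) in enumerate(sizes)
            if src <= max_positions and tgt <= max_positions]

sizes = [(30, 28), (2000, 15), (512, 600), (15, 1100)]
print(filter_by_size(sizes))  # [0, 2] -> oversized examples are skipped
```

If most of the sr-de data is being filtered out this way, the remaining per-pair data may be far smaller than expected, which could be worth checking alongside the checkpoint message.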

Logs for model en-de,en-de

fairseq-train processed_multilingual_less_vocab -a multilingual_transformer_iwslt_de_en --optimizer adam --lr 0.0005 --label-smoothing 0.1 --dropout 0.3 --max-tokens 1100 --min-lr '1e-09' --lr-scheduler inverse_sqrt --weight-decay 0.0001 --criterion label_smoothed_cross_entropy --max-update 50000 --warmup-updates 4000 --warmup-init-lr '1e-07' --adam-betas '(0.9, 0.98)' --save-dir checkpoints/transformer_multilingual --task multilingual_translation --lang-pairs en-de,en-de --max-sentences 8 --batch-size 8

Namespace(adam_betas='(0.9, 0.98)', adam_eps=1e-08, adaptive_softmax_cutoff=None, adaptive_softmax_dropout=0, arch='multilingual_transformer_iwslt_de_en', attention_dropout=0.0, bucket_cap_mb=25, clip_norm=25, cpu=False, criterion='label_smoothed_cross_entropy', data='processed_multilingual_less_vocab', ddp_backend='c10d', decoder_attention_heads=4, decoder_embed_dim=512, decoder_embed_path=None, decoder_ffn_embed_dim=1024, decoder_input_dim=512, decoder_layers=6, decoder_learned_pos=False, decoder_normalize_before=False, decoder_output_dim=512, device_id=1, distributed_backend='nccl', distributed_init_method='tcp://localhost:16097', distributed_port=-1, distributed_rank=1, distributed_world_size=4, dropout=0.3, encoder_attention_heads=4, encoder_embed_dim=512, encoder_embed_path=None, encoder_ffn_embed_dim=1024, encoder_layers=6, encoder_learned_pos=False, encoder_normalize_before=False, fix_batches_to_gpus=False, fp16=False, fp16_init_scale=128, fp16_scale_tolerance=0.0, fp16_scale_window=None, keep_interval_updates=-1, keep_last_epochs=-1, label_smoothing=0.1, lang_pairs='en-de,en-de', lazy_load=False, left_pad_source='True', left_pad_target='False', log_format=None, log_interval=1000, lr=[0.0005], lr_scheduler='inverse_sqrt', lr_shrink=0.1, max_epoch=0, max_sentences=8, max_sentences_valid=8, max_source_positions=1024, max_target_positions=1024, max_tokens=1100, max_update=50000, memory_efficient_fp16=False, min_loss_scale=0.0001, min_lr=1e-09, momentum=0.99, no_epoch_checkpoints=False, no_progress_bar=False, no_save=False, no_token_positional_embeddings=False, num_workers=0, optimizer='adam', optimizer_overrides='{}', raw_text=False, relu_dropout=0.0, reset_lr_scheduler=False, reset_optimizer=False, restore_file='checkpoint_last.pt', save_dir='checkpoints/transformer_multilingual', save_interval=1, save_interval_updates=0, seed=1, sentence_avg=False, share_all_embeddings=False, share_decoder_embeddings=False, share_decoder_input_output_embed=False, share_decoders=False, share_encoder_embeddings=False, share_encoders=False, skip_invalid_size_inputs_valid_test=False, source_lang=None, target_lang=None, task='multilingual_translation', train_subset='train', update_freq=[1], user_dir=None, valid_subset='valid', validate_interval=1, warmup_init_lr=1e-07, warmup_updates=4000, weight_decay=0.0001)

(The other ranks print identical Namespaces, differing only in device_id and distributed_rank. Dictionary and dataset lines below are printed once per rank; repeats omitted, and the log is truncated here in the original attachment.)

| [en] dictionary: 10000 types
| [de] dictionary: 10000 types
| processed_multilingual_less_vocab train 188660 examples
| processed_multilingual_less_vocab valid 20630 examples
| distributed init (rank 1): tcp://localhost:16097
| distributed init (rank 2): tcp://localhost:16097
| distributed init (rank 0): tcp://localhost:16097
| distributed init (rank 3): tcp://localhost:16097
| initialized host vulcan03.umiacs.umd.edu as rank 0

MultilingualTransformerModel(
  (models): ModuleDict(
    (en-de): FairseqModel(
      (encoder): TransformerEncoder(
        (embed_tokens): Embedding(10000, 512, padding_idx=1)
        (embed_positions): SinusoidalPositionalEmbedding()
        (layers): ModuleList(
          (0): TransformerEncoderLayer(
            (self_attn): MultiheadAttention(
              (out_proj): Linear(in_features=512, out_features=512, bias=True)
            )
            (fc1): Linear(in_features=512, out_features=1024, bias=True)
            (fc2): Linear(in_features=1024, out_features=512, bias=True)
            (layer_norms): ModuleList(
              (0): LayerNorm(torch.Size([512]), eps=1e-05, elementwise_affine=True)
              (1): LayerNorm(torch.Size([512]), eps=1e-05, elementwise_affine=True)
            )
          )
          [(1)-(5): five more identical TransformerEncoderLayer blocks, trimmed]
        )
      )
      (decoder): TransformerDecoder(
        (embed_tokens): Embedding(10000, 512, padding_idx=1)
        (embed_positions): SinusoidalPositionalEmbedding()
        (layers): ModuleList(
          (0): TransformerDecoderLayer(
            (self_attn): MultiheadAttention(
              (out_proj): Linear(in_features=512, out_features=512, bias=True)
            )
            (self_attn_layer_norm): LayerNorm(torch.Size([512]), eps=1e-05, elementwise_affine=True)
            (encoder_attn): MultiheadAttention(
              (out_proj): Linear(in_features=512, out_features=512, bias=True)
            )
            (encoder_attn_layer_norm): LayerNorm(torch.Size([512]), eps=1e-05, elementwise_affine=True)
            (fc1): Linear(in_features=512, out_features=1024, bias=True)
            (fc2): Linear(in_features=1024, out_features=512, bias=True)
            (final_layer_norm): LayerNorm(torch.Size([512]), eps=1e-05, elementwise_affine=True)
          )
          [(1)-(5): five more identical TransformerDecoderLayer blocks, trimmed]
        )
      )
    )
  )
)
| model multilingual_transformer_iwslt_de_en, criterion LabelSmoothedCrossEntropyCriterion
| num. model params: 46903296 (num. trained: 46903296)

| training on 4 GPUs
| max tokens per GPU = 1100 and max sentences per GPU = 8
| WARNING: 141293 samples have invalid sizes and will be skipped, max_positions={'en-de': (1024, 1024)}, first few sample ids=[47367, 47368, 47369, 47370, 47371, 47372, 47373, 47374, 47375, 47376]
| no existing checkpoint found checkpoints/transformer_multilingual/checkpoint_last.pt
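The WARNING above comes from fairseq filtering out training examples whose source or target length exceeds `max_positions`. A minimal sketch of that check (`filter_by_size` here is a hypothetical simplification, not fairseq's actual function):

```python
# Hypothetical simplification of fairseq's size filtering: an example is
# skipped when either side is longer than the (src, tgt) max_positions.
def filter_by_size(sizes, max_positions):
    max_src, max_tgt = max_positions
    return [i for i, (src_len, tgt_len) in enumerate(sizes)
            if src_len <= max_src and tgt_len <= max_tgt]

# Only the first pair fits within (1024, 1024).
kept = filter_by_size([(10, 12), (2000, 5), (8, 3000)], (1024, 1024))
print(kept)  # → [0]
```

Skipped examples are dropped silently from each epoch, which is why the per-epoch iterator above shows 10127 batches rather than covering all 188660 examples.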

epoch 001: 0%| | 1/10127 [00:01<3:22:24, 1.20s/it, loss=14.058, nll_loss=14.063, ppl=17118.73, wps=3, ups=0.0, wpb=94, bsz=6, num_updates=1, lr=2.24975e-07, gnorm=9.477, clip=0%, oom=0, wall=32, trai…
epoch 001: 0%| | 2/10127 [00:02<3:21:57, 1.20s/it, loss=13.988, nll_loss=13.986, ppl=16222.38, wps=34, ups=0.1, wpb=68, bsz=5, num_updates=2, lr=3.4995e-07, gnorm=10.996, clip=0%, oom=0, wall=33, tra…
epoch 001: 0%| | 3/10127 [00:03<2:55:35, 1.04s/it, loss=13.963, nll_loss=13.958, ppl=15910.10, wps=62, ups=0.1, wpb=68, bsz=5, num_updates=3, lr=4.74925e-07, gnorm=10.701, clip=0%, oom=0, wall=27, tr…
epoch 001: 0%| | 4/10127 [00:03<2:37:40, 1.07it/s, loss=13.993, nll_loss=13.991, ppl=16280.52, wps=68, ups=0.1, wpb=65, bsz=4, num_updates=4, lr=5.999e-07, gnorm=10.745, clip=0%, oom=0, wall=28, trai…
epoch 001: 0%| | 5/10127 [00:04<2:39:35, 1.06it/s, loss=14.000, nll_loss=13.999, ppl=16374.30, wps=76, ups=0.1, wpb=72, bsz=5, num_updates=5, lr=7.24875e-07, gnorm=10.333, clip=0%, oom=0, wall=35, tr…
epoch 001: 0%| | 6/10127 [00:06<3:01:06, 1.07s/it, loss=13.974, nll_loss=13.970, ppl=16046.36, wps=68, ups=0.2, wpb=71, bsz=4, num_updates=6, lr=8.4985e-07, gnorm=10.340, clip=0%, oom=0, wall=30, tra…
epoch 001: 0%| | 7/10127 [00:07<2:56:50, 1.05s/it, loss=14.026, nll_loss=14.028, ppl=16706.07, wps=70, ups=0.2, wpb=72, bsz=4, num_updates=7, lr=9.74825e-07, gnorm=10.166, clip=0%, oom=0, wall=31, tr…

[each progress line is printed by all four workers and truncated in the original; one line is kept per update, interleaved duplicates trimmed]

huihuifan commented 5 years ago

@pipibjc

pipibjc commented 5 years ago

@shlokk, your command looks fine to me, and I don't see anything suspicious in your log. Could you force-quit (Ctrl+C) when the job gets stuck and paste the stack trace that gets printed here? That could help identify where it is getting stuck.
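Besides Ctrl+C, Python's stdlib `faulthandler` can dump every thread's stack from a hung process without killing it, which is often easier to read than an interrupted multiprocessing traceback. A small sketch, independent of fairseq (Unix-only, since it uses SIGUSR1):

```python
import faulthandler
import signal
import tempfile

# Dump all thread stacks when the process receives SIGUSR1, so a hung
# training job can be inspected from another shell with `kill -USR1 <pid>`.
faulthandler.register(signal.SIGUSR1, all_threads=True)

# The same dump can also be triggered programmatically:
with tempfile.TemporaryFile(mode="w+") as f:
    faulthandler.dump_traceback(file=f, all_threads=True)
    f.seek(0)
    print("most recent call first" in f.read())  # → True
```

The dump lists each thread's frames "most recent call first", so a deadlocked worker shows exactly which call it is blocked in.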

shlokk commented 5 years ago

[On Ctrl+C the job printed two long float tensors (several hundred values each, device='cuda:0') followed by "None]], [0]"; the raw values are trimmed here.]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/vulcan/scratch/shlok/Ana/envs/cmsc723project/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/vulcan/scratch/shlok/Ana/envs/cmsc723project/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/vulcan/scratch/shlok/Ana/envs/cmsc723project/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 25, in _wrap
    error_queue.put(traceback.format_exc())
  File "/vulcan/scratch/shlok/Ana/envs/cmsc723project/lib/python3.6/multiprocessing/queues.py", line 347, in put
    self._writer.send_bytes(obj)
  File "/vulcan/scratch/shlok/Ana/envs/cmsc723project/lib/python3.6/multiprocessing/connection.py", line 200, in send_bytes
    self._send_bytes(m[offset:offset + size])
  File "/vulcan/scratch/shlok/Ana/envs/cmsc723project/lib/python3.6/multiprocessing/connection.py", line 398, in _send_bytes
    self._send(buf)
  File "/vulcan/scratch/shlok/Ana/envs/cmsc723project/lib/python3.6/multiprocessing/connection.py", line 368, in _send
    n = write(self._handle, buf)
KeyboardInterrupt

Traceback (most recent call last):
  File "/vulcan/scratch/shlok/Ana/envs/cmsc723project/bin/fairseq-train", line 11, in <module>
    load_entry_point('fairseq', 'console_scripts', 'fairseq-train')()
  File "/vulcan/scratch/shlok/fairseq/fairseq_cli/train.py", line 403, in cli_main
    nprocs=args.distributed_world_size,
  File "/vulcan/scratch/shlok/Ana/envs/cmsc723project/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 167, in spawn
    while not spawn_context.join():
  File "/vulcan/scratch/shlok/Ana/envs/cmsc723project/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 73, in join
    timeout=timeout,
  File "/vulcan/scratch/shlok/Ana/envs/cmsc723project/lib/python3.6/multiprocessing/connection.py", line 911, in wait
    ready = selector.select(timeout)
  File "/vulcan/scratch/shlok/Ana/envs/cmsc723project/lib/python3.6/selectors.py", line 376, in select
    fd_event_list = self._poll.poll(timeout)
KeyboardInterrupt

Process SpawnProcess-4:

pipibjc commented 5 years ago

Thanks. It's hard to see what's happening in a multi-GPU setup, though. Could you try again on a single GPU and report the stack trace? E.g. by prefixing your command with CUDA_VISIBLE_DEVICES, like CUDA_VISIBLE_DEVICES=0 fairseq-train ...(your other parameters)...

shlokk commented 5 years ago

Hi, when I changed to a single-GPU setup, training starts fine now.

myleott commented 5 years ago

There was a bug with multi-GPU training of multilingual models when using --ddp-backend=no_c10d. I'm not sure whether it affected you, but it has been fixed in https://github.com/pytorch/fairseq/pull/527.