facebookresearch / fairseq

Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
MIT License

Covid-19 Large Scale Megatron Training #2503

Closed · agemagician closed this issue 3 years ago

agemagician commented 4 years ago

🐛 Bug

We are training different language models for protein sequences as part of the effort to fight Covid-19. We have already published several pretrained models trained on Summit (6k GPUs) and TPU Pods (V3-1024 and V3-512), and we are interested in training Megatron: https://github.com/agemagician/ProtTrans

We are testing Megatron training on Colab TPUs, but it fails. RoBERTa training, however, works fine.

To Reproduce

Preprocessing works fine:

!fairseq-preprocess \
    --only-source \
    --trainpref train_data_pro.txt \
    --validpref train_data_pro.txt \
    --testpref train_data_pro.txt \
    --destdir dataset/uniref50 \
    --dataset-impl lazy \
    --workers 2

RoBERTa works fine:

TOTAL_UPDATES=125000    # Total number of training steps
WARMUP_UPDATES=10000    # Warmup the learning rate over this many updates
PEAK_LR=0.0005          # Peak learning rate, adjust as needed
TOKENS_PER_SAMPLE=512   # Max sequence length
MAX_POSITIONS=512       # Num. positional embeddings (usually same as above)
MAX_SENTENCES=1         # Number of sequences per batch (batch size)
UPDATE_FREQ=16          # Increase the batch size 16x

DATA_DIR="dataset/uniref50"

!fairseq-train --tpu $DATA_DIR \
    --task masked_lm --criterion masked_lm \
    --arch roberta_base --sample-break-mode complete --tokens-per-sample $TOKENS_PER_SAMPLE \
    --optimizer adam --adam-betas '(0.9,0.98)' --adam-eps 1e-6 --clip-norm 0.0 \
    --lr-scheduler polynomial_decay --lr $PEAK_LR --warmup-updates $WARMUP_UPDATES --total-num-update $TOTAL_UPDATES \
    --dropout 0.1 --attention-dropout 0.1 --weight-decay 0.01 \
    --max-sentences $MAX_SENTENCES --update-freq $UPDATE_FREQ \
    --max-update $TOTAL_UPDATES --log-format simple --log-interval 1 \
    --distributed-world-size=8

Megatron fails:

DATA_DIR="dataset/uniref50"

!fairseq-train --tpu $DATA_DIR \
  --distributed-world-size 8  \
  --num-workers 2 \
  --model-parallel-size 8 \
  --criterion vocab_parallel_cross_entropy \
  --task language_modeling \
  --sample-break-mode none \
  --tokens-per-sample 1024 \
  --arch transformer_lm_megatron_11b \
  --share-decoder-input-output-embed \
  --optimizer adam --adam-betas "(0.9, 0.98)" --adam-eps 1e-08 --clip-norm 0.0 \
  --lr-scheduler inverse_sqrt --lr 0.00015 \
  --warmup-updates 3000 --weight-decay 0.01 \
  --dropout 0.1 --attention-dropout 0.1 \
  --max-sentences 2 \
  --max-update 300000 \
  --dataset-impl lazy

RoBERTa training log (working):

2020-08-20 12:55:49 | WARNING | root | TPU has started up successfully with version pytorch-1.6
2020-08-20 12:55:57 | WARNING | root | TPU has started up successfully with version pytorch-1.6
2020-08-20 12:56:08 | INFO | fairseq_cli.train | Namespace(activation_dropout=0.0, activation_fn='gelu', adam_betas='(0.9,0.98)', adam_eps=1e-06, all_gather_list_size=16384, arch='roberta_base', attention_dropout=0.1, best_checkpoint_metric='loss', bf16=False, bpe=None, broadcast_buffers=False, bucket_cap_mb=25, checkpoint_suffix='', clip_norm=0.0, cpu=False, criterion='masked_lm', curriculum=0, data='dataset/uniref50', data_buffer_size=10, dataset_impl=None, ddp_backend='c10d', device_id=0, disable_validation=False, distributed_backend='nccl', distributed_init_method=None, distributed_no_spawn=False, distributed_port=-1, distributed_rank=0, distributed_world_size=8, distributed_wrapper='DDP', dropout=0.1, empty_cache_freq=0, encoder_attention_heads=12, encoder_embed_dim=768, encoder_ffn_embed_dim=3072, encoder_layerdrop=0, encoder_layers=12, encoder_layers_to_keep=None, end_learning_rate=0.0, fast_stat_sync=False, find_unused_parameters=False, finetune_from_model=None, fix_batches_to_gpus=False, fixed_validation_seed=None, force_anneal=None, fp16=False, fp16_init_scale=128, fp16_no_flatten_grads=False, fp16_scale_tolerance=0.0, fp16_scale_window=None, freq_weighted_replacement=False, keep_best_checkpoints=-1, keep_interval_updates=-1, keep_last_epochs=-1, leave_unmasked_prob=0.1, localsgd_frequency=3, log_format='simple', log_interval=1, lr=[0.0005], lr_scheduler='polynomial_decay', mask_prob=0.15, mask_whole_words=False, max_epoch=0, max_sentences=1, max_sentences_valid=1, max_tokens=None, max_tokens_valid=None, max_update=125000, maximize_best_checkpoint_metric=False, memory_efficient_bf16=False, memory_efficient_fp16=False, min_loss_scale=0.0001, min_lr=-1, model_parallel_size=1, no_epoch_checkpoints=False, no_last_checkpoints=False, no_progress_bar=False, no_save=False, no_save_optimizer_state=False, no_seed_provided=True, nprocs_per_node=1, num_workers=1, optimizer='adam', optimizer_overrides='{}', patience=-1, pooler_activation_fn='tanh', pooler_dropout=0.0, power=1.0, profile=False, quant_noise_pq=0, quant_noise_pq_block_size=8, quant_noise_scalar=0, quantization_config_path=None, random_token_prob=0.1, required_batch_size_multiple=8, reset_dataloader=False, reset_lr_scheduler=False, reset_meters=False, reset_optimizer=False, restore_file='checkpoint_last.pt', sample_break_mode='complete', save_dir='checkpoints', save_interval=1, save_interval_updates=0, seed=1, sentence_avg=False, shorten_data_split_list='', shorten_method='none', skip_invalid_size_inputs_valid_test=False, slowmo_algorithm='LocalSGD', slowmo_momentum=None, stop_time_hours=0, task='masked_lm', tensorboard_logdir='', threshold_loss_scale=None, tokenizer=None, tokens_per_sample=512, total_num_update=125000, tpu=True, train_subset='train', update_freq=[16], use_bmuf=False, use_old_adam=False, user_dir=None, valid_subset='valid', validate_after_updates=0, validate_interval=1, validate_interval_updates=0, warmup_updates=10000, weight_decay=0.01)
2020-08-20 12:56:08 | INFO | fairseq.tasks.masked_lm | dictionary: 24 types
2020-08-20 12:56:08 | INFO | fairseq.data.data_utils | loaded 144 examples from: dataset/uniref50/valid
2020-08-20 12:56:08 | INFO | fairseq.tasks.masked_lm | loaded 112 blocks from: dataset/uniref50/valid
2020-08-20 12:56:35 | INFO | fairseq_cli.train | RobertaModel(
  (encoder): RobertaEncoder(
    (sentence_encoder): TransformerSentenceEncoder(
      (dropout_module): FairseqDropout()
      (embed_tokens): Embedding(25, 768, padding_idx=1)
      (embed_positions): LearnedPositionalEmbedding(514, 768, padding_idx=1)
      (layers): ModuleList(
        (0): TransformerSentenceEncoderLayer(
          (dropout_module): FairseqDropout()
          (activation_dropout_module): FairseqDropout()
          (self_attn): MultiheadAttention(
            (dropout_module): FairseqDropout()
            (k_proj): Linear(in_features=768, out_features=768, bias=True)
            (v_proj): Linear(in_features=768, out_features=768, bias=True)
            (q_proj): Linear(in_features=768, out_features=768, bias=True)
            (out_proj): Linear(in_features=768, out_features=768, bias=True)
          )
          (self_attn_layer_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
          (fc1): Linear(in_features=768, out_features=3072, bias=True)
          (fc2): Linear(in_features=3072, out_features=768, bias=True)
          (final_layer_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
        )
        (1): TransformerSentenceEncoderLayer(
          (dropout_module): FairseqDropout()
          (activation_dropout_module): FairseqDropout()
          (self_attn): MultiheadAttention(
            (dropout_module): FairseqDropout()
            (k_proj): Linear(in_features=768, out_features=768, bias=True)
            (v_proj): Linear(in_features=768, out_features=768, bias=True)
            (q_proj): Linear(in_features=768, out_features=768, bias=True)
            (out_proj): Linear(in_features=768, out_features=768, bias=True)
          )
          (self_attn_layer_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
          (fc1): Linear(in_features=768, out_features=3072, bias=True)
          (fc2): Linear(in_features=3072, out_features=768, bias=True)
          (final_layer_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
        )
        (2): TransformerSentenceEncoderLayer(
          (dropout_module): FairseqDropout()
          (activation_dropout_module): FairseqDropout()
          (self_attn): MultiheadAttention(
            (dropout_module): FairseqDropout()
            (k_proj): Linear(in_features=768, out_features=768, bias=True)
            (v_proj): Linear(in_features=768, out_features=768, bias=True)
            (q_proj): Linear(in_features=768, out_features=768, bias=True)
            (out_proj): Linear(in_features=768, out_features=768, bias=True)
          )
          (self_attn_layer_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
          (fc1): Linear(in_features=768, out_features=3072, bias=True)
          (fc2): Linear(in_features=3072, out_features=768, bias=True)
          (final_layer_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
        )
        (3): TransformerSentenceEncoderLayer(
          (dropout_module): FairseqDropout()
          (activation_dropout_module): FairseqDropout()
          (self_attn): MultiheadAttention(
            (dropout_module): FairseqDropout()
            (k_proj): Linear(in_features=768, out_features=768, bias=True)
            (v_proj): Linear(in_features=768, out_features=768, bias=True)
            (q_proj): Linear(in_features=768, out_features=768, bias=True)
            (out_proj): Linear(in_features=768, out_features=768, bias=True)
          )
          (self_attn_layer_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
          (fc1): Linear(in_features=768, out_features=3072, bias=True)
          (fc2): Linear(in_features=3072, out_features=768, bias=True)
          (final_layer_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
        )
        (4): TransformerSentenceEncoderLayer(
          (dropout_module): FairseqDropout()
          (activation_dropout_module): FairseqDropout()
          (self_attn): MultiheadAttention(
            (dropout_module): FairseqDropout()
            (k_proj): Linear(in_features=768, out_features=768, bias=True)
            (v_proj): Linear(in_features=768, out_features=768, bias=True)
            (q_proj): Linear(in_features=768, out_features=768, bias=True)
            (out_proj): Linear(in_features=768, out_features=768, bias=True)
          )
          (self_attn_layer_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
          (fc1): Linear(in_features=768, out_features=3072, bias=True)
          (fc2): Linear(in_features=3072, out_features=768, bias=True)
          (final_layer_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
        )
        (5): TransformerSentenceEncoderLayer(
          (dropout_module): FairseqDropout()
          (activation_dropout_module): FairseqDropout()
          (self_attn): MultiheadAttention(
            (dropout_module): FairseqDropout()
            (k_proj): Linear(in_features=768, out_features=768, bias=True)
            (v_proj): Linear(in_features=768, out_features=768, bias=True)
            (q_proj): Linear(in_features=768, out_features=768, bias=True)
            (out_proj): Linear(in_features=768, out_features=768, bias=True)
          )
          (self_attn_layer_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
          (fc1): Linear(in_features=768, out_features=3072, bias=True)
          (fc2): Linear(in_features=3072, out_features=768, bias=True)
          (final_layer_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
        )
        (6): TransformerSentenceEncoderLayer(
          (dropout_module): FairseqDropout()
          (activation_dropout_module): FairseqDropout()
          (self_attn): MultiheadAttention(
            (dropout_module): FairseqDropout()
            (k_proj): Linear(in_features=768, out_features=768, bias=True)
            (v_proj): Linear(in_features=768, out_features=768, bias=True)
            (q_proj): Linear(in_features=768, out_features=768, bias=True)
            (out_proj): Linear(in_features=768, out_features=768, bias=True)
          )
          (self_attn_layer_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
          (fc1): Linear(in_features=768, out_features=3072, bias=True)
          (fc2): Linear(in_features=3072, out_features=768, bias=True)
          (final_layer_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
        )
        (7): TransformerSentenceEncoderLayer(
          (dropout_module): FairseqDropout()
          (activation_dropout_module): FairseqDropout()
          (self_attn): MultiheadAttention(
            (dropout_module): FairseqDropout()
            (k_proj): Linear(in_features=768, out_features=768, bias=True)
            (v_proj): Linear(in_features=768, out_features=768, bias=True)
            (q_proj): Linear(in_features=768, out_features=768, bias=True)
            (out_proj): Linear(in_features=768, out_features=768, bias=True)
          )
          (self_attn_layer_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
          (fc1): Linear(in_features=768, out_features=3072, bias=True)
          (fc2): Linear(in_features=3072, out_features=768, bias=True)
          (final_layer_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
        )
        (8): TransformerSentenceEncoderLayer(
          (dropout_module): FairseqDropout()
          (activation_dropout_module): FairseqDropout()
          (self_attn): MultiheadAttention(
            (dropout_module): FairseqDropout()
            (k_proj): Linear(in_features=768, out_features=768, bias=True)
            (v_proj): Linear(in_features=768, out_features=768, bias=True)
            (q_proj): Linear(in_features=768, out_features=768, bias=True)
            (out_proj): Linear(in_features=768, out_features=768, bias=True)
          )
          (self_attn_layer_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
          (fc1): Linear(in_features=768, out_features=3072, bias=True)
          (fc2): Linear(in_features=3072, out_features=768, bias=True)
          (final_layer_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
        )
        (9): TransformerSentenceEncoderLayer(
          (dropout_module): FairseqDropout()
          (activation_dropout_module): FairseqDropout()
          (self_attn): MultiheadAttention(
            (dropout_module): FairseqDropout()
            (k_proj): Linear(in_features=768, out_features=768, bias=True)
            (v_proj): Linear(in_features=768, out_features=768, bias=True)
            (q_proj): Linear(in_features=768, out_features=768, bias=True)
            (out_proj): Linear(in_features=768, out_features=768, bias=True)
          )
          (self_attn_layer_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
          (fc1): Linear(in_features=768, out_features=3072, bias=True)
          (fc2): Linear(in_features=3072, out_features=768, bias=True)
          (final_layer_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
        )
        (10): TransformerSentenceEncoderLayer(
          (dropout_module): FairseqDropout()
          (activation_dropout_module): FairseqDropout()
          (self_attn): MultiheadAttention(
            (dropout_module): FairseqDropout()
            (k_proj): Linear(in_features=768, out_features=768, bias=True)
            (v_proj): Linear(in_features=768, out_features=768, bias=True)
            (q_proj): Linear(in_features=768, out_features=768, bias=True)
            (out_proj): Linear(in_features=768, out_features=768, bias=True)
          )
          (self_attn_layer_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
          (fc1): Linear(in_features=768, out_features=3072, bias=True)
          (fc2): Linear(in_features=3072, out_features=768, bias=True)
          (final_layer_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
        )
        (11): TransformerSentenceEncoderLayer(
          (dropout_module): FairseqDropout()
          (activation_dropout_module): FairseqDropout()
          (self_attn): MultiheadAttention(
            (dropout_module): FairseqDropout()
            (k_proj): Linear(in_features=768, out_features=768, bias=True)
            (v_proj): Linear(in_features=768, out_features=768, bias=True)
            (q_proj): Linear(in_features=768, out_features=768, bias=True)
            (out_proj): Linear(in_features=768, out_features=768, bias=True)
          )
          (self_attn_layer_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
          (fc1): Linear(in_features=768, out_features=3072, bias=True)
          (fc2): Linear(in_features=3072, out_features=768, bias=True)
          (final_layer_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
        )
      )
      (emb_layer_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
    )
    (lm_head): RobertaLMHead(
      (dense): Linear(in_features=768, out_features=768, bias=True)
      (layer_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
    )
  )
  (classification_heads): ModuleDict()
)
2020-08-20 12:56:35 | INFO | fairseq_cli.train | task: masked_lm (MaskedLMTask)
2020-08-20 12:56:35 | INFO | fairseq_cli.train | model: roberta_base (RobertaModel)
2020-08-20 12:56:35 | INFO | fairseq_cli.train | criterion: masked_lm (MaskedLmLoss)
2020-08-20 12:56:35 | INFO | fairseq_cli.train | num. model params: 86062105 (num. trained: 86062105)
2020-08-20 12:56:59 | INFO | fairseq.trainer | detected shared parameter: encoder.sentence_encoder.embed_tokens.weight <- encoder.lm_head.weight
2020-08-20 12:56:59 | INFO | fairseq_cli.train | training on 8 devices (GPUs/TPUs)
2020-08-20 12:56:59 | INFO | fairseq_cli.train | max tokens per GPU = None and max sentences per GPU = 1
2020-08-20 12:56:59 | INFO | fairseq.trainer | no existing checkpoint found checkpoints/checkpoint_last.pt
2020-08-20 12:56:59 | INFO | fairseq.trainer | loading train data for epoch 1
2020-08-20 12:56:59 | INFO | fairseq.data.data_utils | loaded 144 examples from: dataset/uniref50/train
2020-08-20 12:56:59 | INFO | fairseq.tasks.masked_lm | loaded 112 blocks from: dataset/uniref50/train
2020-08-20 12:56:59 | INFO | fairseq.trainer | begin training epoch 1
2020-08-20 12:59:05 | INFO | root | NOTE: XLA compilation detected; too many of these can lead to slow training, but we expect a few in the beginning
2020-08-20 12:59:05 | INFO | train_inner | epoch 001:      1 / 1 loss=4.776, ppl=27.39, wps=0, ups=0, wpb=39632, bsz=112, num_updates=1, lr=5e-08, gnorm=5.505, train_wall=118, wall=126
2020-08-20 12:59:05 | INFO | fairseq_cli.train | begin validation on "valid" subset
2020-08-20 13:00:26 | INFO | valid | epoch 001 | valid on 'valid' subset | loss 4.776 | ppl 27.4 | wps 537.5 | wpb 2830.9 | bsz 8 | num_updates 1
2020-08-20 13:00:26 | INFO | fairseq_cli.train | begin save checkpoint
2020-08-20 13:00:57 | INFO | fairseq.checkpoint_utils | saved checkpoint checkpoints/checkpoint1.pt (epoch 1 @ 1 updates, score 4.776) (writing took 31.607709737999812 seconds)
2020-08-20 13:00:57 | INFO | fairseq_cli.train | end of epoch 1 (average epoch stats below)
2020-08-20 13:00:57 | INFO | train | epoch 001 | loss 4.776 | ppl 27.39 | wps 0 | ups 0 | wpb 39632 | bsz 112 | num_updates 1 | lr 5e-08 | gnorm 5.505 | train_wall 118 | wall 238
2020-08-20 13:00:57 | INFO | fairseq.trainer | begin training epoch 2
2020-08-20 13:01:55 | INFO | root | NOTE: XLA compilation detected; too many of these can lead to slow training, but we expect a few in the beginning
2020-08-20 13:01:55 | INFO | train_inner | epoch 002:      1 / 1 loss=4.765, ppl=27.2, wps=233.3, ups=0.01, wpb=39632, bsz=112, num_updates=2, lr=1e-07, gnorm=5.435, train_wall=51, wall=296
2020-08-20 13:01:55 | INFO | fairseq_cli.train | begin validation on "valid" subset
2020-08-20 13:02:14 | INFO | valid | epoch 002 | valid on 'valid' subset | loss 4.781 | ppl 27.49 | wps 3777.8 | wpb 2830.9 | bsz 8 | num_updates 2 | best_loss 4.776
2020-08-20 13:02:14 | INFO | fairseq_cli.train | begin save checkpoint

Megatron error log:

2020-08-20 14:04:40 | WARNING | root | TPU has started up successfully with version pytorch-1.6
2020-08-20 14:04:48 | WARNING | root | TPU has started up successfully with version pytorch-1.6
Exception in device=TPU:2: Default process group is not initialized
Exception in device=TPU:4: Default process group is not initialized
Exception in device=TPU:7: Default process group is not initialized
Exception in device=TPU:1: Default process group is not initialized
Exception in device=TPU:5: Default process group is not initialized
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn
    fn(gindex, *args)
  File "/usr/local/lib/python3.6/dist-packages/fairseq/distributed_utils.py", line 150, in distributed_main
    args.distributed_rank = distributed_init(args)
  File "/usr/local/lib/python3.6/dist-packages/fairseq/distributed_utils.py", line 136, in distributed_init
    initialize_model_parallel(args.model_parallel_size)
  File "/usr/local/lib/python3.6/dist-packages/fairseq/model_parallel/megatron/mpu/initialize.py", line 49, in initialize_model_parallel
    if torch.distributed.get_rank() == 0:
  File "/usr/local/lib/python3.6/dist-packages/torch/distributed/distributed_c10d.py", line 598, in get_rank
    _check_default_pg()
  File "/usr/local/lib/python3.6/dist-packages/torch/distributed/distributed_c10d.py", line 210, in _check_default_pg
    "Default process group is not initialized"
AssertionError: Default process group is not initialized
Exception in device=TPU:3: Default process group is not initialized
Exception in device=TPU:6: Default process group is not initialized
Exception in device=TPU:0: Default process group is not initialized
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn
    fn(gindex, *args)
  File "/usr/local/lib/python3.6/dist-packages/fairseq/distributed_utils.py", line 150, in distributed_main
    args.distributed_rank = distributed_init(args)
  File "/usr/local/lib/python3.6/dist-packages/fairseq/distributed_utils.py", line 136, in distributed_init
    initialize_model_parallel(args.model_parallel_size)
  File "/usr/local/lib/python3.6/dist-packages/fairseq/model_parallel/megatron/mpu/initialize.py", line 49, in initialize_model_parallel
    if torch.distributed.get_rank() == 0:
  File "/usr/local/lib/python3.6/dist-packages/torch/distributed/distributed_c10d.py", line 598, in get_rank
    _check_default_pg()
  File "/usr/local/lib/python3.6/dist-packages/torch/distributed/distributed_c10d.py", line 210, in _check_default_pg
    "Default process group is not initialized"
AssertionError: Default process group is not initialized
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn
    fn(gindex, *args)
  File "/usr/local/lib/python3.6/dist-packages/fairseq/distributed_utils.py", line 150, in distributed_main
    args.distributed_rank = distributed_init(args)
  File "/usr/local/lib/python3.6/dist-packages/fairseq/distributed_utils.py", line 136, in distributed_init
    initialize_model_parallel(args.model_parallel_size)
  File "/usr/local/lib/python3.6/dist-packages/fairseq/model_parallel/megatron/mpu/initialize.py", line 49, in initialize_model_parallel
    if torch.distributed.get_rank() == 0:
  File "/usr/local/lib/python3.6/dist-packages/torch/distributed/distributed_c10d.py", line 598, in get_rank
    _check_default_pg()
  File "/usr/local/lib/python3.6/dist-packages/torch/distributed/distributed_c10d.py", line 210, in _check_default_pg
    "Default process group is not initialized"
AssertionError: Default process group is not initialized
Traceback (most recent call last):
Traceback (most recent call last):
Traceback (most recent call last):
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn
    fn(gindex, *args)
  File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn
    fn(gindex, *args)
  File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn
    fn(gindex, *args)
  File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn
    fn(gindex, *args)
  File "/usr/local/lib/python3.6/dist-packages/fairseq/distributed_utils.py", line 150, in distributed_main
    args.distributed_rank = distributed_init(args)
  File "/usr/local/lib/python3.6/dist-packages/fairseq/distributed_utils.py", line 150, in distributed_main
    args.distributed_rank = distributed_init(args)
  File "/usr/local/lib/python3.6/dist-packages/fairseq/distributed_utils.py", line 150, in distributed_main
    args.distributed_rank = distributed_init(args)
  File "/usr/local/lib/python3.6/dist-packages/fairseq/distributed_utils.py", line 136, in distributed_init
    initialize_model_parallel(args.model_parallel_size)
  File "/usr/local/lib/python3.6/dist-packages/fairseq/distributed_utils.py", line 150, in distributed_main
    args.distributed_rank = distributed_init(args)
  File "/usr/local/lib/python3.6/dist-packages/fairseq/distributed_utils.py", line 136, in distributed_init
    initialize_model_parallel(args.model_parallel_size)
  File "/usr/local/lib/python3.6/dist-packages/fairseq/model_parallel/megatron/mpu/initialize.py", line 49, in initialize_model_parallel
    if torch.distributed.get_rank() == 0:
  File "/usr/local/lib/python3.6/dist-packages/fairseq/distributed_utils.py", line 136, in distributed_init
    initialize_model_parallel(args.model_parallel_size)
  File "/usr/local/lib/python3.6/dist-packages/fairseq/distributed_utils.py", line 136, in distributed_init
    initialize_model_parallel(args.model_parallel_size)
  File "/usr/local/lib/python3.6/dist-packages/fairseq/model_parallel/megatron/mpu/initialize.py", line 49, in initialize_model_parallel
    if torch.distributed.get_rank() == 0:
  File "/usr/local/lib/python3.6/dist-packages/torch/distributed/distributed_c10d.py", line 598, in get_rank
    _check_default_pg()
  File "/usr/local/lib/python3.6/dist-packages/fairseq/model_parallel/megatron/mpu/initialize.py", line 49, in initialize_model_parallel
    if torch.distributed.get_rank() == 0:
  File "/usr/local/lib/python3.6/dist-packages/fairseq/model_parallel/megatron/mpu/initialize.py", line 49, in initialize_model_parallel
    if torch.distributed.get_rank() == 0:
  File "/usr/local/lib/python3.6/dist-packages/torch/distributed/distributed_c10d.py", line 598, in get_rank
    _check_default_pg()
  File "/usr/local/lib/python3.6/dist-packages/torch/distributed/distributed_c10d.py", line 210, in _check_default_pg
    "Default process group is not initialized"
  File "/usr/local/lib/python3.6/dist-packages/torch/distributed/distributed_c10d.py", line 598, in get_rank
    _check_default_pg()
  File "/usr/local/lib/python3.6/dist-packages/torch/distributed/distributed_c10d.py", line 598, in get_rank
    _check_default_pg()
  File "/usr/local/lib/python3.6/dist-packages/torch/distributed/distributed_c10d.py", line 210, in _check_default_pg
    "Default process group is not initialized"
AssertionError: Default process group is not initialized
  File "/usr/local/lib/python3.6/dist-packages/torch/distributed/distributed_c10d.py", line 210, in _check_default_pg
    "Default process group is not initialized"
  File "/usr/local/lib/python3.6/dist-packages/torch/distributed/distributed_c10d.py", line 210, in _check_default_pg
    "Default process group is not initialized"
AssertionError: Default process group is not initialized
AssertionError: Default process group is not initialized
AssertionError: Default process group is not initialized
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn
    fn(gindex, *args)
  File "/usr/local/lib/python3.6/dist-packages/fairseq/distributed_utils.py", line 150, in distributed_main
    args.distributed_rank = distributed_init(args)
  File "/usr/local/lib/python3.6/dist-packages/fairseq/distributed_utils.py", line 136, in distributed_init
    initialize_model_parallel(args.model_parallel_size)
  File "/usr/local/lib/python3.6/dist-packages/fairseq/model_parallel/megatron/mpu/initialize.py", line 49, in initialize_model_parallel
    if torch.distributed.get_rank() == 0:
  File "/usr/local/lib/python3.6/dist-packages/torch/distributed/distributed_c10d.py", line 598, in get_rank
    _check_default_pg()
  File "/usr/local/lib/python3.6/dist-packages/torch/distributed/distributed_c10d.py", line 210, in _check_default_pg
    "Default process group is not initialized"
AssertionError: Default process group is not initialized
Traceback (most recent call last):
  File "/usr/local/bin/fairseq-train", line 8, in <module>
    sys.exit(cli_main())
  File "/usr/local/lib/python3.6/dist-packages/fairseq_cli/train.py", line 333, in cli_main
    distributed_utils.call_main(args, main)
  File "/usr/local/lib/python3.6/dist-packages/fairseq/distributed_utils.py", line 185, in call_main
    nprocs=8,  # use all 8 TPU cores
  File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 296, in spawn
    start_method=start_method)
  File "/usr/local/lib/python3.6/dist-packages/torch/multiprocessing/spawn.py", line 158, in start_processes
    while not context.join():
  File "/usr/local/lib/python3.6/dist-packages/torch/multiprocessing/spawn.py", line 113, in join
    (error_index, exitcode)
Exception: process 0 terminated with exit code 17

Any idea how we could fix this issue?

Your reply is highly appreciated.

myleott commented 4 years ago

I am working on model-parallel TPU support; it's almost ready :)

agemagician commented 4 years ago

Thanks @myleott for your quick reply.

We are eagerly waiting for it.

Do you have a timeline? We need to schedule the models we will train for our project.

myleott commented 4 years ago

There's a branch which "runs," but there's something wrong with the way we init/modify RNG state, since it seems to converge poorly compared to similar runs on GPU.

I'm hoping to dig into the discrepancy in the next couple weeks.

agemagician commented 4 years ago

Perfect, we hope it will be finished soon. We will put it into our schedule on the assumption that it will be ready soon on your side.

neel04 commented 4 years ago

Any update on TPU compatibility?

myleott commented 4 years ago

Hey, unfortunately this will be a bit delayed. We’re migrating to use fairscale as the backend for this particular code, so we’ll need to update the code there for TPU support.

wintersurvival commented 3 years ago

RoBERTa works because you did not set the "model-parallel-size" argument, so intra-layer model parallelism is never used on that run. The Megatron run sets --model-parallel-size 8, which does trigger it.
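
To make that concrete, here is a minimal sketch of why one run hits the assertion and the other does not. It is simplified from the traceback above and is not fairseq's actual source; the helper name distributed_init_sketch is illustrative only.

import torch.distributed as dist

def distributed_init_sketch(args):
    # On --tpu runs, fairseq spawns one worker per core via
    # torch_xla.distributed.xla_multiprocessing and never calls
    # dist.init_process_group(), so no "default process group" exists.
    if getattr(args, "model_parallel_size", 1) > 1:
        # The Megatron command (--model-parallel-size 8) reaches Megatron's
        # initialize_model_parallel(), which queries torch.distributed:
        rank = dist.get_rank()  # AssertionError: Default process group is not initialized
    # The RoBERTa command leaves model_parallel_size at its default of 1,
    # skips the branch above, and trains purely through the XLA backend.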

stale[bot] commented 3 years ago

This issue has been automatically marked as stale. If this issue is still affecting you, please leave any comment (for example, "bump"), and we'll keep it open. We are sorry that we haven't been able to prioritize it yet. If you have any new additional information, please include it with your comment!

Hizhaoyuan commented 3 years ago

Looking forward to support for TPU

myleott commented 3 years ago

You can use the dev_tpu_mp branch:

[Screenshot: the dev_tpu_mp branch on GitHub, 2021-08-23]

I haven't tested it recently, and it may no longer work with the latest XLA code. Unfortunately this is not a direction we are prioritizing at this time, so we can't provide much support.
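
For reference, installing fairseq from that branch in a Colab notebook might look like the cell below. This is an untested sketch; as noted above, the branch may have moved or may no longer build against current XLA releases.

!pip uninstall -y fairseq
!pip install git+https://github.com/facebookresearch/fairseq.git@dev_tpu_mp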