bigcode-project / Megatron-LM

Ongoing research training transformer models at scale

OOM while merging starcoder model (after sft) from TP=4,PP=4 to TP=8,PP=1 #69

Closed mintsugaEHEH closed 1 year ago

mintsugaEHEH commented 1 year ago

I am trying to reshape a fine-tuned StarCoder checkpoint (https://huggingface.co/bigcode/starcoder-megatron/tree/main) from TP=4, PP=4 to TP=8, PP=1 using tools/checkpoint_util.py, but the conversion runs out of memory (OOM). The machine I use has 512 GB of RAM, which should be enough to hold the whole model. Is there any way to work around this?
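
For reference, the conversion is launched roughly as follows. This is a sketch: the paths are placeholders, and the flag names reflect my understanding of the tools/checkpoint_util.py interface, so they may differ slightly in this fork.

python tools/checkpoint_util.py \
    --model-type GPT \
    --loader megatron \
    --saver megatron \
    --load-dir /path/to/sft_checkpoint_tp4_pp4 \
    --save-dir /path/to/output_tp8_pp1 \
    --target-tensor-parallel-size 8 \
    --target-pipeline-parallel-size 1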

Here's my log before the OOM (I added some debugging output to trace the process). It looks like the checkpoint loader is sending the network layers to the saver.

Loaded checkpoint_loader_megatron as the loader.
Loaded checkpoint_saver_megatron as the saver.
Starting saver...
Starting loader...
Wandb import failed
Wandb import failed
/opt/conda/lib/python3.9/site-packages/apex/pyprof/__init__.py:5: FutureWarning: pyprof will be removed by the end of June, 2022
  warnings.warn("pyprof will be removed by the end of June, 2022", FutureWarning)
/opt/conda/lib/python3.9/site-packages/apex/pyprof/__init__.py:5: FutureWarning: pyprof will be removed by the end of June, 2022
  warnings.warn("pyprof will be removed by the end of June, 2022", FutureWarning)
Setting num_layers to 40 from checkpoint
Setting hidden_size to 6144 from checkpoint
Setting ffn_hidden_size to 24576 from checkpoint
Setting seq_length to 2048 from checkpoint
Setting num_attention_heads to 48 from checkpoint
Setting kv_channels to 128 from checkpoint
Setting max_position_embeddings to 8192 from checkpoint
Checkpoint did not provide arguments add_position_embedding
Checkpoint did not provide arguments use_rotary_position_embeddings
Checkpoint did not provide arguments rotary_percent
Checkpoint did not provide arguments add_bias_linear
Checkpoint did not provide arguments swiglu
Checkpoint did not provide arguments untie_embeddings_and_output_weights
Checkpoint did not provide arguments apply_layernorm_1p
Setting tokenizer_type to TokenizerFromFile from checkpoint
Setting padded_vocab_size to 49152 from checkpoint
Setting attention_head_type to multiquery from checkpoint
Setting tensor_model_parallel_size to 4 from checkpoint
Setting pipeline_model_parallel_size to 4 from checkpoint
Checkpoint did not provide arguments virtual_pipeline_model_parallel_size
Checkpoint did not provide arguments num_layers_per_virtual_pipeline_stage
using world size: 16, data-parallel-size: 1, tensor-model-parallel size: 4, pipeline-model-parallel size: 4 
setting global batch size to 1
using torch.float32 for parameters ...
------------------------ arguments ------------------------
  accumulate_allreduce_grads_in_fp32 .............. False
  adam_beta1 ...................................... 0.9
  adam_beta2 ...................................... 0.999
  adam_eps ........................................ 1e-08
  add_bias_linear ................................. True
  add_position_embedding .......................... True
  adlr_autoresume ................................. False
  adlr_autoresume_interval ........................ 1000
  apply_layernorm_1p .............................. False
  apply_query_key_layer_scaling ................... True
  apply_residual_connection_post_layernorm ........ False
  async_tensor_model_parallel_allreduce ........... False
  attention_dropout ............................... 0.1
  attention_head_type ............................. multiquery
  attention_softmax_in_fp32 ....................... False
  barrier_with_L1_time ............................ True
  bert_binary_head ................................ True
  bert_embedder_type .............................. megatron
  bert_load ....................................... None
  bf16 ............................................ False
  bias_dropout_fusion ............................. False
  bias_gelu_fusion ................................ False
  biencoder_projection_dim ........................ 0
  biencoder_shared_query_context_model ............ False
  block_data_path ................................. None
  classes_fraction ................................ 1.0
  clip_grad ....................................... 1.0
  consumed_train_samples .......................... 0
  consumed_valid_samples .......................... 0
  data_impl ....................................... infer
  data_parallel_random_init ....................... False
  data_parallel_size .............................. 1
  data_path ....................................... None
  data_per_class_fraction ......................... 1.0
  data_sharding ................................... True
  dataloader_type ................................. single
  DDP_impl ........................................ local
  decoder_num_layers .............................. None
  decoder_seq_length .............................. None
  dino_bottleneck_size ............................ 256
  dino_freeze_last_layer .......................... 1
  dino_head_hidden_size ........................... 2048
  dino_local_crops_number ......................... 10
  dino_local_img_size ............................. 96
  dino_norm_last_layer ............................ False
  dino_teacher_temp ............................... 0.07
  dino_warmup_teacher_temp ........................ 0.04
  dino_warmup_teacher_temp_epochs ................. 30
  distribute_saved_activations .................... False
  distributed_backend ............................. nccl
  distributed_timeout ............................. 600
  distributed_timeout_minutes ..................... 10
  embedding_path .................................. None
  empty_unused_memory_level ....................... 0
  encoder_num_layers .............................. 40
  encoder_seq_length .............................. 2048
  end_weight_decay ................................ 0.01
  eod_mask_loss ................................... False
  eval_interval ................................... 1000
  eval_iters ...................................... 100
  evidence_data_path .............................. None
  exit_duration_in_mins ........................... None
  exit_interval ................................... None
  exit_on_missing_checkpoint ...................... False
  exit_signal_handler ............................. False
  ffn_hidden_size ................................. 24576
  fim_rate ........................................ 0.0
  fim_spm_rate .................................... 0.5
  finetune ........................................ False
  fp16 ............................................ False
  fp16_lm_cross_entropy ........................... False
  fp32_residual_connection ........................ False
  fp8_amax_compute_algo ........................... most_recent
  fp8_amax_history_len ............................ 1
  fp8_e4m3 ........................................ False
  fp8_hybrid ...................................... False
  fp8_interval .................................... 1
  fp8_margin ...................................... 0
  fp8_wgrad ....................................... True
  global_batch_size ............................... 1
  glu_activation .................................. None
  gradient_accumulation_fusion .................... True
  head_lr_mult .................................... 1.0
  hidden_dropout .................................. 0.1
  hidden_size ..................................... 6144
  hysteresis ...................................... 2
  ict_head_size ................................... None
  ict_load ........................................ None
  img_h ........................................... 224
  img_w ........................................... 224
  indexer_batch_size .............................. 128
  indexer_log_interval ............................ 1000
  inference_batch_times_seqlen_threshold .......... 512
  init_method_std ................................. 0.02
  init_method_xavier_uniform ...................... False
  initial_loss_scale .............................. 4294967296
  iter_per_epoch .................................. 1250
  iteration ....................................... xxx
  kv_channels ..................................... 128
  layernorm_epsilon ............................... 1e-05
  lazy_mpu_init ................................... None
  load ............................................ xxx
  local_rank ...................................... 0
  log_batch_size_to_tensorboard ................... False
  log_interval .................................... 100
  log_learning_rate_to_tensorboard ................ True
  log_loss_scale_to_tensorboard ................... True
  log_memory_to_tensorboard ....................... False
  log_num_zeros_in_grad ........................... False
  log_params_norm ................................. False
  log_timers_to_tensorboard ....................... False
  log_validation_ppl_to_tensorboard ............... False
  log_world_size_to_tensorboard ................... False
  loss_scale ...................................... None
  loss_scale_window ............................... 1000
  lr .............................................. None
  lr_decay_iters .................................. None
  lr_decay_samples ................................ None
  lr_decay_style .................................. linear
  lr_warmup_fraction .............................. None
  lr_warmup_iters ................................. 0
  lr_warmup_samples ............................... 0
  make_vocab_size_divisible_by .................... 128
  mask_factor ..................................... 1.0
  mask_prob ....................................... 0.15
  mask_type ....................................... random
  masked_softmax_fusion ........................... False
  max_position_embeddings ......................... 8192
  max_tokens_to_oom ............................... 12000
  merge_file ...................................... None
  micro_batch_size ................................ 1
  min_loss_scale .................................. 1.0
  min_lr .......................................... 0.0
  mmap_warmup ..................................... False
  no_load_optim ................................... True
  no_load_rng ..................................... True
  no_persist_layer_norm ........................... False
  no_save_optim ................................... True
  no_save_rng ..................................... True
  num_attention_heads ............................. 48
  num_channels .................................... 3
  num_classes ..................................... 1000
  num_experts ..................................... None
  num_layers ...................................... 40
  num_layers_per_virtual_pipeline_stage ........... None
  num_workers ..................................... 2
  onnx_safe ....................................... None
  openai_gelu ..................................... False
  optimizer ....................................... adam
  output_bert_embeddings .......................... False
  override_opt_param_scheduler .................... False
  padded_vocab_size ............................... 49152
  params_dtype .................................... torch.float32
  patch_dim ....................................... 16
  perform_initialization .......................... False
  pipeline_model_parallel_size .................... 4
  pipeline_model_parallel_split_rank .............. None
  position_embedding_type ......................... PositionEmbeddingType.absolute
  query_in_block_prob ............................. 0.1
  rampup_batch_size ............................... None
  rank ............................................ 0
  recompute_granularity ........................... None
  recompute_method ................................ None
  recompute_num_layers ............................ 1
  reset_attention_mask ............................ False
  reset_position_ids .............................. False
  retriever_report_topk_accuracies ................ []
  retriever_score_scaling ......................... False
  retriever_seq_length ............................ 256
  retro_add_retriever ............................. False
  retro_cyclic_train_iters ........................ None
  retro_encoder_attention_dropout ................. 0.1
  retro_encoder_hidden_dropout .................... 0.1
  retro_encoder_layers ............................ 2
  retro_num_neighbors ............................. 2
  retro_num_retrieved_chunks ...................... 2
  retro_return_doc_ids ............................ False
  retro_workdir ................................... None
  rotary_percent .................................. 1.0
  sample_rate ..................................... 1.0
  save ............................................ None
  save_interval ................................... None
  scatter_gather_tensors_in_pipeline .............. True
  seed ............................................ 1234
  seq_length ...................................... 2048
  sequence_parallel ............................... False
  sgd_momentum .................................... 0.9
  short_seq_prob .................................. 0.1
  split ........................................... None
  squared_relu .................................... False
  standalone_embedding_stage ...................... False
  start_weight_decay .............................. 0.01
  structured_logs ................................. False
  structured_logs_dir ............................. None
  swiglu .......................................... False
  swin_backbone_type .............................. tiny
  tensor_model_parallel_size ...................... 4
  tensorboard_dir ................................. None
  tensorboard_log_interval ........................ 1
  tensorboard_queue_size .......................... 1000
  test_data_path .................................. None
  test_weighted_split_paths ....................... None
  test_weighted_split_paths_path .................. None
  timing_log_level ................................ 0
  timing_log_option ............................... minmax
  titles_data_path ................................ None
  tokenizer_file .................................. None
  tokenizer_model ................................. None
  tokenizer_type .................................. TokenizerFromFile
  train_data_path ................................. None
  train_iters ..................................... None
  train_samples ................................... None
  train_weighted_split_paths ...................... None
  train_weighted_split_paths_path ................. None
  transformer_impl ................................ local
  transformer_pipeline_model_parallel_size ........ 4
  transformer_timers .............................. False
  untie_embeddings_and_output_weights ............. False
  use_checkpoint_args ............................. False
  use_checkpoint_opt_param_scheduler .............. False
  use_contiguous_buffers_in_local_ddp ............. True
  use_cpu_initialization .......................... True
  use_distributed_optimizer ....................... False
  use_flash_attn .................................. False
  use_one_sent_docs ............................... False
  use_ring_exchange_p2p ........................... False
  use_rotary_position_embeddings .................. False
  valid_data_path ................................. None
  valid_num_workers ............................... 2
  valid_weighted_split_paths ...................... None
  valid_weighted_split_paths_path ................. None
  variable_seq_lengths ............................ False
  virtual_pipeline_model_parallel_size ............ None
  vision_backbone_type ............................ vit
  vision_pretraining .............................. False
  vision_pretraining_type ......................... classify
  vocab_extra_ids ................................. 0
  vocab_file ...................................... None
  vocab_size ...................................... None
  wandb_entity_name ............................... None
  wandb_project_name .............................. None
  weight_decay .................................... 0.01
  weight_decay_incr_style ......................... constant
  world_size ...................................... 16
-------------------- end of arguments ---------------------
Wandb import failed
setting number of micro-batches to constant 1
running on CUDA devices
loading rank 0 / count 4
building GPT model ...
 loading checkpoint from xxx at iteration xxx
 checkpoint version 3.0
  successfully loaded checkpoint from xxx at iteration xxx
loading rank 1 / count 4
building GPT model ...
 loading checkpoint from xxx at iteration xxx
 checkpoint version 3.0
  successfully loaded checkpoint from xxx at iteration xxx
loading rank 2 / count 4
building GPT model ...
 loading checkpoint from xxx at iteration xxx
 checkpoint version 3.0
  successfully loaded checkpoint from xxx at iteration xxx
loading rank 3 / count 4
building GPT model ...
 loading checkpoint from xxx at iteration xxx
 checkpoint version 3.0
  successfully loaded checkpoint from xxx at iteration xxx

Overwriting default ffn_hidden_size value None with value from checkpoint 24576.
Overwriting default kv_channels value None with value from checkpoint 128.
Overwriting default micro_batch_size value 1 with value from checkpoint 2.
Overwriting default global_batch_size value None with value from checkpoint 64.
Overwriting default log_interval value 100 with value from checkpoint 10.
Overwriting default tensorboard_dir value None with value from checkpoint xxx/tensorboard/.
Overwriting default dataloader_type value None with value from checkpoint single.
Overwriting default lr value None with value from checkpoint 1e-05.
Overwriting default lr_decay_style value linear with value from checkpoint cosine.
Overwriting default min_lr value 0.0 with value from checkpoint 1e-06.
Overwriting default load value None with value from checkpoint xxx/starcoder-megatron.
Checkpoint had argument load_step but new arguments does not have this.
Checkpoint had argument finetune_from but new arguments does not have this.
Overwriting default bf16 value False with value from checkpoint True.
Overwriting default accumulate_allreduce_grads_in_fp32 value False with value from checkpoint True.
Overwriting default local_rank value None with value from checkpoint 0.
Overwriting default eval_iters value 100 with value from checkpoint 10.
Overwriting default eval_interval value 1000 with value from checkpoint 5000.
Overwriting default data_path value None with value from checkpoint ['xx/data/sft_experiments/xxx'].
Overwriting default split value None with value from checkpoint 998,1,1.
Overwriting default merge_file value None with value from checkpoint ./experiments/cro/starcoder/merges.txt.
Overwriting default data_impl value infer with value from checkpoint mmap.
Overwriting default log_validation_ppl_to_tensorboard value False with value from checkpoint True.
Overwriting default world_size value 8 with value from checkpoint 16.
Checkpoint had argument transformer_pipeline_model_parallel_size but new arguments does not have this.
Checkpoint had argument data_parallel_size but new arguments does not have this.
Checkpoint had argument valid_weighted_split_names but new arguments does not have this.
Checkpoint had argument valid_weighted_split_weights but new arguments does not have this.
Checkpoint had argument valid_weighted_split_splits but new arguments does not have this.
Checkpoint had argument test_weighted_split_names but new arguments does not have this.
Checkpoint had argument test_weighted_split_weights but new arguments does not have this.
Checkpoint had argument test_weighted_split_splits but new arguments does not have this.
Checkpoint had argument consumed_train_samples but new arguments does not have this.
Checkpoint had argument consumed_valid_samples but new arguments does not have this.
Checkpoint had argument padded_vocab_size but new arguments does not have this.
Checkpoint had argument model_type but new arguments does not have this.
Checkpoint had argument iteration but new arguments does not have this.
Checkpoint had argument do_train but new arguments does not have this.
Checkpoint had argument do_valid but new arguments does not have this.
Checkpoint had argument do_test but new arguments does not have this.
Checkpoint had argument curr_iteration but new arguments does not have this.
using world size: 16, data-parallel-size: 2, tensor-model-parallel size: 4, pipeline-model-parallel size: 2 
using torch.bfloat16 for parameters ...
------------------------ arguments ------------------------
  accumulate_allreduce_grads_in_fp32 .............. True
  adam_beta1 ...................................... 0.9
  adam_beta2 ...................................... 0.999
  adam_eps ........................................ 1e-08
  add_bias_linear ................................. True
  add_position_embedding .......................... True
  adlr_autoresume ................................. False
  adlr_autoresume_interval ........................ 1000
  apply_layernorm_1p .............................. False
  apply_query_key_layer_scaling ................... True
  apply_residual_connection_post_layernorm ........ False
  async_tensor_model_parallel_allreduce ........... False
  attention_dropout ............................... 0.1
  attention_head_type ............................. multiquery
  attention_softmax_in_fp32 ....................... False
  barrier_with_L1_time ............................ True
  bert_binary_head ................................ True
  bert_embedder_type .............................. megatron
  bert_load ....................................... None
  bf16 ............................................ True
  bias_dropout_fusion ............................. False
  bias_gelu_fusion ................................ False
  biencoder_projection_dim ........................ 0
  biencoder_shared_query_context_model ............ False
  block_data_path ................................. None
  classes_fraction ................................ 1.0
  clip_grad ....................................... 1.0
  consumed_train_samples .......................... 0
  consumed_valid_samples .......................... 0
  data_impl ....................................... mmap
  data_parallel_random_init ....................... False
  data_parallel_size .............................. 2
  data_path ....................................... ['xx/data/sft_experiments/xxx']
  data_per_class_fraction ......................... 1.0
  data_sharding ................................... True
  dataloader_type ................................. single
  DDP_impl ........................................ local
  decoder_num_layers .............................. None
  decoder_seq_length .............................. None
  dino_bottleneck_size ............................ 256
  dino_freeze_last_layer .......................... 1
  dino_head_hidden_size ........................... 2048
  dino_local_crops_number ......................... 10
  dino_local_img_size ............................. 96
  dino_norm_last_layer ............................ False
  dino_teacher_temp ............................... 0.07
  dino_warmup_teacher_temp ........................ 0.04
  dino_warmup_teacher_temp_epochs ................. 30
  distribute_saved_activations .................... False
  distributed_backend ............................. nccl
  distributed_timeout ............................. 600
  distributed_timeout_minutes ..................... 10
  embedding_path .................................. None
  empty_unused_memory_level ....................... 0
  encoder_num_layers .............................. 40
  encoder_seq_length .............................. 2048
  end_weight_decay ................................ 0.01
  eod_mask_loss ................................... False
  eval_interval ................................... 5000
  eval_iters ...................................... 10
  evidence_data_path .............................. None
  exit_duration_in_mins ........................... None
  exit_interval ................................... None
  exit_on_missing_checkpoint ...................... False
  exit_signal_handler ............................. False
  ffn_hidden_size ................................. 24576
  fim_rate ........................................ 0.0
  fim_spm_rate .................................... 0.5
  finetune ........................................ False
  fp16 ............................................ False
  fp16_lm_cross_entropy ........................... False
  fp32_residual_connection ........................ False
  fp8_amax_compute_algo ........................... most_recent
  fp8_amax_history_len ............................ 1
  fp8_e4m3 ........................................ False
  fp8_hybrid ...................................... False
  fp8_interval .................................... 1
  fp8_margin ...................................... 0
  fp8_wgrad ....................................... True
  global_batch_size ............................... 64
  glu_activation .................................. None
  gradient_accumulation_fusion .................... True
  head_lr_mult .................................... 1.0
  hidden_dropout .................................. 0.1
  hidden_size ..................................... 6144
  hysteresis ...................................... 2
  ict_head_size ................................... None
  ict_load ........................................ None
  img_h ........................................... 224
  img_w ........................................... 224
  indexer_batch_size .............................. 128
  indexer_log_interval ............................ 1000
  inference_batch_times_seqlen_threshold .......... 512
  init_method_std ................................. 0.02
  init_method_xavier_uniform ...................... False
  initial_loss_scale .............................. 4294967296
  iter_per_epoch .................................. 1250
  kv_channels ..................................... 128
  layernorm_epsilon ............................... 1e-05
  lazy_mpu_init ................................... None
  load ............................................ xxx/starcoder-megatron
  local_rank ...................................... 0
  log_batch_size_to_tensorboard ................... False
  log_interval .................................... 10
  log_learning_rate_to_tensorboard ................ True
  log_loss_scale_to_tensorboard ................... True
  log_memory_to_tensorboard ....................... False
  log_num_zeros_in_grad ........................... False
  log_params_norm ................................. False
  log_timers_to_tensorboard ....................... False
  log_validation_ppl_to_tensorboard ............... True
  log_world_size_to_tensorboard ................... False
  loss_scale ...................................... None
  loss_scale_window ............................... 1000
  lr .............................................. 1e-05
  lr_decay_iters .................................. None
  lr_decay_samples ................................ None
  lr_decay_style .................................. cosine
  lr_warmup_fraction .............................. None
  lr_warmup_iters ................................. 0
  lr_warmup_samples ............................... 0
  make_vocab_size_divisible_by .................... 128
  mask_factor ..................................... 1.0
  mask_prob ....................................... 0.15
  mask_type ....................................... random
  masked_softmax_fusion ........................... False
  max_position_embeddings ......................... 8192
  max_tokens_to_oom ............................... 12000
  merge_file ...................................... ./experiments/cro/starcoder/merges.txt
  micro_batch_size ................................ 2
  min_loss_scale .................................. 1.0
  min_lr .......................................... 1e-06
  mmap_warmup ..................................... False
  no_load_optim ................................... True
  no_load_rng ..................................... True
  no_persist_layer_norm ........................... False
  no_save_optim ................................... True
  no_save_rng ..................................... True
  num_attention_heads ............................. 48
  num_channels .................................... 3
  num_classes ..................................... 1000
  num_experts ..................................... None
  num_layers ...................................... 40
  num_layers_per_virtual_pipeline_stage ........... None
  num_workers ..................................... 2
  onnx_safe ....................................... None
  openai_gelu ..................................... False
  optimizer ....................................... adam
  output_bert_embeddings .......................... False
  override_opt_param_scheduler .................... False
  params_dtype .................................... torch.bfloat16
  patch_dim ....................................... 16
  perform_initialization .......................... False
  pipeline_model_parallel_size .................... 2
  pipeline_model_parallel_split_rank .............. None
  position_embedding_type ......................... PositionEmbeddingType.absolute
  query_in_block_prob ............................. 0.1
  rampup_batch_size ............................... None
  rank ............................................ 0
  recompute_granularity ........................... None
  recompute_method ................................ None
  recompute_num_layers ............................ 1
  reset_attention_mask ............................ False
  reset_position_ids .............................. False
  retriever_report_topk_accuracies ................ []
  retriever_score_scaling ......................... False
  retriever_seq_length ............................ 256
  retro_add_retriever ............................. False
  retro_cyclic_train_iters ........................ None
  retro_encoder_attention_dropout ................. 0.1
  retro_encoder_hidden_dropout .................... 0.1
  retro_encoder_layers ............................ 2
  retro_num_neighbors ............................. 2
  retro_num_retrieved_chunks ...................... 2
  retro_return_doc_ids ............................ False
  retro_workdir ................................... None
  rotary_percent .................................. 1.0
  sample_rate ..................................... 1.0
  save ............................................ xxx
  save_interval ................................... 1
  scatter_gather_tensors_in_pipeline .............. True
  seed ............................................ 1234
  seq_length ...................................... 2048
  sequence_parallel ............................... False
  sgd_momentum .................................... 0.9
  short_seq_prob .................................. 0.1
  split ........................................... 998,1,1
  squared_relu .................................... False
  standalone_embedding_stage ...................... False
  start_weight_decay .............................. 0.01
  structured_logs ................................. False
  structured_logs_dir ............................. None
  swiglu .......................................... False
  swin_backbone_type .............................. tiny
  tensor_model_parallel_size ...................... 4
  tensorboard_dir ................................. xxx/tensorboard/
  tensorboard_log_interval ........................ 1
  tensorboard_queue_size .......................... 1000
  test_data_path .................................. None
  test_weighted_split_names ....................... None
  test_weighted_split_paths ....................... None
  test_weighted_split_paths_path .................. None
  test_weighted_split_splits ...................... None
  test_weighted_split_weights ..................... None
  timing_log_level ................................ 0
  timing_log_option ............................... minmax
  titles_data_path ................................ None
  tokenizer_file .................................. None
  tokenizer_model ................................. None
  tokenizer_type .................................. TokenizerFromFile
  train_data_path ................................. None
  train_iters ..................................... None
  train_samples ................................... None
  train_weighted_split_paths ...................... None
  train_weighted_split_paths_path ................. None
  transformer_impl ................................ local
  transformer_pipeline_model_parallel_size ........ 2
  transformer_timers .............................. False
  untie_embeddings_and_output_weights ............. False
  use_checkpoint_args ............................. False
  use_checkpoint_opt_param_scheduler .............. False
  use_contiguous_buffers_in_local_ddp ............. True
  use_cpu_initialization .......................... True
  use_distributed_optimizer ....................... False
  use_flash_attn .................................. False
  use_one_sent_docs ............................... False
  use_ring_exchange_p2p ........................... False
  use_rotary_position_embeddings .................. False
  valid_data_path ................................. None
  valid_num_workers ............................... 2
  valid_weighted_split_names ...................... None
  valid_weighted_split_paths ...................... None
  valid_weighted_split_paths_path ................. None
  valid_weighted_split_splits ..................... None
  valid_weighted_split_weights .................... None
  variable_seq_lengths ............................ False
  virtual_pipeline_model_parallel_size ............ None
  vision_backbone_type ............................ vit
  vision_pretraining .............................. False
  vision_pretraining_type ......................... classify
  vocab_extra_ids ................................. 0
  vocab_file ...................................... None
  vocab_size ...................................... None
  wandb_entity_name ............................... None
  wandb_project_name .............................. None
  weight_decay .................................... 0.01
  weight_decay_incr_style ......................... constant
  world_size ...................................... 16
-------------------- end of arguments ---------------------
setting number of micro-batches to constant 16
Setting consumed_train_samples to None and consumed_valid_samples to None
Wandb import failed
running on CUDA devices
sending embeddings
sending transformer layer 0
received embeddings
Original vocab size not specified, leaving embedding table as-is. If you've changed the tensor parallel size this could cause problems.
building GPT model ...
sending transformer layer 1
sending transformer layer 2
WARNING! Distributed processes aren't initialized, so word embeddings in the last layer are not initialized. If you are just manipulating a model this is fine, but this needs to be handled manually. If you are training something is definitely wrong.
sending transformer layer 3
sending transformer layer 4
sending transformer layer 5
sending transformer layer 6
sending transformer layer 7
sending transformer layer 8
sending transformer layer 9
loading rank 0 / count 4
building GPT model ...
 loading checkpoint from xxx at iteration xxx
building GPT model ...
building GPT model ...
building GPT model ...
debugging in saver, get transformers layer from queue ...
received transformer layer 0
debugging in saver, get transformers layer from queue ...
received transformer layer 1
debugging in saver, get transformers layer from queue ...
received transformer layer 2
debugging in saver, get transformers layer from queue ...
received transformer layer 3
debugging in saver, get transformers layer from queue ...
received transformer layer 4
debugging in saver, get transformers layer from queue ...
received transformer layer 5
debugging in saver, get transformers layer from queue ...
received transformer layer 6
debugging in saver, get transformers layer from queue ...
received transformer layer 7
debugging in saver, get transformers layer from queue ...
received transformer layer 8
debugging in saver, get transformers layer from queue ...
received transformer layer 9
debugging in saver, get transformers layer from queue ...
 checkpoint version 3.0
  successfully loaded checkpoint from xxx at iteration xxx
loading rank 1 / count 4
building GPT model ...
 loading checkpoint from xxx at iteration xxx
 checkpoint version 3.0
  successfully loaded checkpoint from xxx at iteration xxx
loading rank 2 / count 4
building GPT model ...
 loading checkpoint from xxx at iteration xxx
mintsugaEHEH commented 1 year ago

I solved this problem by explicitly releasing memory with gc.collect() in checkpoint_saver_megatron.py.
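
For anyone hitting the same issue, here is a minimal sketch of the kind of change that helped. The function and variable names below are hypothetical; the actual receive loop and message format in checkpoint_saver_megatron.py differ, so this only illustrates where to drop references and force a collection.

import gc

def receive_and_save_layers(queue, save_layer):
    # Hypothetical saver loop: the loader puts one transformer layer's tensors
    # on the queue at a time, and the saver copies them into the target TP/PP layout.
    while True:
        msg = queue.get()   # blocking get of the next layer sent by the loader
        if msg == "done":   # hypothetical end-of-stream sentinel
            break
        save_layer(msg)     # write the tensors into the new checkpoint layout
        del msg             # drop the last reference to the received tensors
        gc.collect()        # force Python to release that memory right away

Without the explicit del and gc.collect(), the received layer tensors can stay alive until the garbage collector eventually runs, so the saver's resident memory keeps growing as more layers arrive.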