bigscience-workshop / Megatron-DeepSpeed

Ongoing research training transformer language models at scale, including: BERT & GPT-2

User Warnings for accessing grad attribute of non-leaf Tensors thrown with TP=1 and PP>1 #356

Open chelseajohn opened 1 year ago

chelseajohn commented 1 year ago

Problem

When pretraining GPT-like models using this script, with tensor_parallelism (TP) = 1 and pipeline_parallelism (PP) > 1, for any model size and batch size, I get the following user warning multiple times:

[default3]:/p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/_tensor.py:1104: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at  /dev/shm/strube1/juwelsbooster/PyTorch/1.11/gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/pytorch/build/aten/src/ATen/core/TensorBody.h:470.)
[default3]:  return self._grad
[default2]:/p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/_tensor.py:1104: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at  /dev/shm/strube1/juwelsbooster/PyTorch/1.11/gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/pytorch/build/aten/src/ATen/core/TensorBody.h:470.)
[default2]:  return self._grad
[default1]:/p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/_tensor.py:1104: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at  /dev/shm/strube1/juwelsbooster/PyTorch/1.11/gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/pytorch/build/aten/src/ATen/core/TensorBody.h:470.)
[default1]:  return self._grad
[default0]:/p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/_tensor.py:1104: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at  /dev/shm/strube1/juwelsbooster/PyTorch/1.11/gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/pytorch/build/aten/src/ATen/core/TensorBody.h:470.)
[default0]:  return self._grad
[default3]:/p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/_tensor.py:1104: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at  /dev/shm/strube1/juwelsbooster/PyTorch/1.11/gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/pytorch/build/aten/src/ATen/core/TensorBody.h:470.)
[default3]:  return self._grad
[default2]:/p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/_tensor.py:1104: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at  /dev/shm/strube1/juwelsbooster/PyTorch/1.11/gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/pytorch/build/aten/src/ATen/core/TensorBody.h:470.)
[default2]:  return self._grad

Training does not halt and the logs look fine, but the warning that the `.grad` attribute won't be populated during backpropagation is concerning. It seems that TP=1 creates non-leaf tensors while TP>1 creates leaf tensors, which is rather confusing to me. From here, tensors that are the result of an operation are not leaf tensors, but again, why only for TP=1?
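
For context (my own minimal sketch, not from the issue): this exact UserWarning is standard PyTorch behaviour and fires only when `.grad` is read on a non-leaf tensor, i.e. a tensor produced by an autograd operation; `retain_grad()` is the documented opt-in that silences it.

```python
import torch

# Leaf tensor: created directly by the user; .grad is populated by backward().
w = torch.randn(3, requires_grad=True)

# Non-leaf tensor: the result of an operation on a tensor that requires grad.
y = w * 2
loss = y.sum()
loss.backward()

print(w.is_leaf, w.grad)  # True, tensor([2., 2., 2.]) -> gradient is populated
print(y.is_leaf)          # False
print(y.grad)             # None, and emits the same UserWarning as in the logs

# Opting in explicitly keeps the gradient and avoids the warning:
y2 = w * 2
y2.retain_grad()
y2.sum().backward()
print(y2.grad)            # tensor([1., 1., 1.])
```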

From my observations, the warning appears only at the beginning of training, and the number of occurrences equals the data-parallel degree (DP) times the number of pipeline passes (see the sketch after the list below). For example, for a 6.7B parameter model:

  1. Nodes=4; GPUs=16; TP=1; PP=2 => DP=8; number of pipeline passes = PP-1 = 1; warning appears 8*1 = 8 times
  2. Nodes=8; GPUs=32; TP=1; PP=8 => DP=4; number of pipeline passes = PP-1 = 7; warning appears 4*7 = 28 times
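
To make that bookkeeping explicit, here is a small sketch (my own, mirroring the two runs above). The relation warnings ≈ DP × (PP - 1) is only the empirical pattern described here, not a documented rule:

```python
def expected_warning_count(world_size: int, tp: int, pp: int) -> int:
    """Empirical pattern observed above: warnings ~= DP * (PP - 1)."""
    dp = world_size // (tp * pp)  # Megatron: world_size = TP * PP * DP
    return dp * (pp - 1)

print(expected_warning_count(world_size=16, tp=1, pp=2))  # 8
print(expected_warning_count(world_size=32, tp=1, pp=8))  # 28
```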

Also, looking at the pipeline communication here, the tensors are communicated with the requires_grad=True flag.
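
As an illustration (my own sketch, not a confirmed diagnosis of where the warning originates in the pipeline engine): a tensor allocated on the receiving stage with requires_grad=True is itself a leaf, but as soon as any op is applied to it, for example an fp16 cast like the pipeline's `_to_float16` stage, the result is a non-leaf tensor, and reading its `.grad` reproduces the warning:

```python
import torch

# An activation "received" from the previous pipeline stage: allocated directly,
# so it is a leaf tensor even though it requires grad.
recv_activation = torch.empty(4, 8, requires_grad=True)
print(recv_activation.is_leaf)  # True -> reading .grad here would not warn

# Any op applied to it (a dtype cast, a view, a clone, ...) produces a non-leaf
# tensor; reading .grad on that result emits the UserWarning from the logs.
casted = recv_activation.half()
print(casted.is_leaf)           # False
print(casted.grad)              # None + the UserWarning
```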

System and Repo specifics:

Example Launch Command

>>Megatron-DeepSpeed/pretrain_gpt.py --tensor-model-parallel-size 1 --pipeline-model-parallel-size 2 --num-layers 32 --hidden-size 4096 --num-attention-heads 32 --seq-length 2048 --max-position-embeddings 2048 --micro-batch-size 2 --global-batch-size 2048 --train-samples 69_335_938 --vocab-file vocab.json --merge-file merges.txt --loss-scale 12 --fp16 --seed 42 --checkpoint-activations --train-tokens 142_000_000_000 --optimizer adam --adam-beta1 0.9 --adam-beta2 0.95 --adam-eps 1e-8 --lr 1.2e-4 --min-lr 1.2e-5 --lr-decay-style cosine --lr-decay-samples 126_953_125 --lr-warmup-samples 183_105 --clip-grad 1.0 --weight-decay 1e-1 --log-interval 1 --save-interval 300 --eval-interval 300 --tensorboard-dir tensorboard --tensorboard-queue-size 5 --log-timers-to-tensorboard --log-batch-size-to-tensorboard --log-validation-ppl-to-tensorboard --save checkpoints --data-path merged_german_only --split 949,50,1 --data-impl mmap --distributed-backend nccl --deepspeed --deepspeed_config ds_config.6416902.json --zero-stage 1 --deepspeed-activation-checkpointing
chelseajohn commented 1 year ago

Log File For the above Command

[default0]:using world size: 16, data-parallel-size: 8, tensor-model-parallel size: 1, pipeline-model-parallel size: 2 
[default0]:using torch.float16 for parameters ...
[default0]:------------------------ arguments ------------------------
[default0]:  abort_on_unmet_fused_kernel_constraints ......... False
[default0]:  accumulate_allreduce_grads_in_fp32 .............. False
[default0]:  adam_beta1 ...................................... 0.9
[default0]:  adam_beta2 ...................................... 0.95
[default0]:  adam_eps ........................................ 1e-08
[default0]:  adlr_autoresume ................................. False
[default0]:  adlr_autoresume_interval ........................ 1000
[default0]:  apply_query_key_layer_scaling ................... True
[default0]:  apply_residual_connection_post_layernorm ........ False
[default0]:  attention_dropout ............................... 0.1
[default0]:  attention_softmax_in_fp32 ....................... False
[default0]:  bert_binary_head ................................ True
[default0]:  bert_load ....................................... None
[default0]:  bf16 ............................................ False
[default0]:  bias_dropout_fusion ............................. True
[default0]:  bias_gelu_fusion ................................ True
[default0]:  biencoder_projection_dim ........................ 0
[default0]:  biencoder_shared_query_context_model ............ False
[default0]:  block_data_path ................................. None
[default0]:  checkpoint_activations .......................... True
[default0]:  checkpoint_in_cpu ............................... False
[default0]:  checkpoint_num_layers ........................... 1
[default0]:  clip_grad ....................................... 1.0
[default0]:  codecarbon_dir .................................. /p/scratch/opengptx-elm/john2/opengpt/bigscience/output_dir/6416902/codecarbon
[default0]:  consumed_train_samples .......................... 0
[default0]:  consumed_train_tokens ........................... 0
[default0]:  consumed_valid_samples .......................... 0
[default0]:  contigious_checkpointing ........................ False
[default0]:  cpu_optimizer ................................... False
[default0]:  cpu_torch_adam .................................. False
[default0]:  curriculum_learning ............................. False
[default0]:  data_impl ....................................... mmap
[default0]:  data_parallel_size .............................. 8
[default0]:  data_path ....................................... ['/p/scratch/opengptx-elm/data/preprocessed_datasets/20221110_german_only/merged_german_only']
[default0]:  dataloader_type ................................. single
[default0]:  DDP_impl ........................................ local
[default0]:  decoder_seq_length .............................. None
[default0]:  deepscale ....................................... False
[default0]:  deepscale_config ................................ None
[default0]:  deepspeed ....................................... True
[default0]:  deepspeed_activation_checkpointing .............. True
[default0]:  deepspeed_config ................................ /p/scratch/opengptx-elm/john2/opengpt/bigscience/output_dir/6416902/ds_config.6416902.json
[default0]:  deepspeed_mpi ................................... False
[default0]:  distribute_checkpointed_activations ............. False
[default0]:  distributed_backend ............................. nccl
[default0]:  embed_layernorm ................................. False
[default0]:  embedding_path .................................. None
[default0]:  encoder_seq_length .............................. 2048
[default0]:  eod_mask_loss ................................... False
[default0]:  eval_interval ................................... 300
[default0]:  eval_iters ...................................... 100
[default0]:  eval_only ....................................... None
[default0]:  evidence_data_path .............................. None
[default0]:  exit_duration_in_mins ........................... None
[default0]:  exit_interval ................................... None
[default0]:  ffn_hidden_size ................................. 16384
[default0]:  finetune ........................................ False
[default0]:  fp16 ............................................ True
[default0]:  fp16_lm_cross_entropy ........................... False
[default0]:  fp32_residual_connection ........................ False
[default0]:  gigaflos_no_embeds .............................. 0
[default0]:  global_batch_size ............................... 2048
[default0]:  glu_activation .................................. None
[default0]:  hidden_dropout .................................. 0.1
[default0]:  hidden_size ..................................... 4096
[default0]:  hysteresis ...................................... 2
[default0]:  ict_head_size ................................... None
[default0]:  ict_load ........................................ None
[default0]:  img_dim ......................................... 224
[default0]:  indexer_batch_size .............................. 128
[default0]:  indexer_log_interval ............................ 1000
[default0]:  inference ....................................... False
[default0]:  init_method_std ................................. 0.02
[default0]:  init_method_xavier_uniform ...................... False
[default0]:  initial_loss_scale .............................. 4294967296
[default0]:  kill_switch_path ................................ None
[default0]:  kv_channels ..................................... 128
[default0]:  layernorm_epsilon ............................... 1e-05
[default0]:  lazy_mpu_init ................................... None
[default0]:  load ............................................ None
[default0]:  local_rank ...................................... None
[default0]:  log_batch_size_to_tensorboard ................... True
[default0]:  log_interval .................................... 1
[default0]:  log_learning_rate_to_tensorboard ................ True
[default0]:  log_level ....................................... None
[default0]:  log_level_replica ............................... None
[default0]:  log_loss_scale_to_tensorboard ................... True
[default0]:  log_num_zeros_in_grad ........................... False
[default0]:  log_params_norm ................................. False
[default0]:  log_path ........................................ None
[default0]:  log_timers_to_tensorboard ....................... True
[default0]:  log_validation_ppl_to_tensorboard ............... True
[default0]:  loss_on_targets_only ............................ False
[default0]:  loss_scale ...................................... 12.0
[default0]:  loss_scale_window ............................... 1000
[default0]:  lr .............................................. 0.00012
[default0]:  lr_decay_iters .................................. None
[default0]:  lr_decay_samples ................................ 126953125
[default0]:  lr_decay_style .................................. cosine
[default0]:  lr_decay_tokens ................................. None
[default0]:  lr_warmup_fraction .............................. None
[default0]:  lr_warmup_iters ................................. 0
[default0]:  lr_warmup_samples ............................... 183105
[default0]:  make_vocab_size_divisible_by .................... 128
[default0]:  mask_prob ....................................... 0.15
[default0]:  masked_softmax_fusion ........................... True
[default0]:  max_position_embeddings ......................... 2048
[default0]:  mean_noise_span_length .......................... None
[default0]:  memory_centric_tiled_linear ..................... False
[default0]:  merge_file ...................................... /p/scratch/opengptx-elm/data/tokenizers/opengpt_de/merges.txt
[default0]:  micro_batch_size ................................ 2
[default0]:  min_loss_scale .................................. 1.0
[default0]:  min_lr .......................................... 1.2e-05
[default0]:  mmap_warmup ..................................... False
[default0]:  no_load_optim ................................... None
[default0]:  no_load_rng ..................................... None
[default0]:  no_save_optim ................................... None
[default0]:  no_save_rng ..................................... None
[default0]:  noise_density ................................... None
[default0]:  num_attention_heads ............................. 32
[default0]:  num_channels .................................... 3
[default0]:  num_classes ..................................... 1000
[default0]:  num_layers ...................................... 32
[default0]:  num_layers_per_virtual_pipeline_stage ........... None
[default0]:  num_workers ..................................... 2
[default0]:  onnx_safe ....................................... None
[default0]:  openai_gelu ..................................... False
[default0]:  optimizer ....................................... adam
[default0]:  override_lr_scheduler ........................... False
[default0]:  pad_vocab_size_to ............................... None
[default0]:  params_dtype .................................... torch.float16
[default0]:  partition_activations ........................... False
[default0]:  patch_dim ....................................... 16
[default0]:  pipeline_model_parallel_size .................... 2
[default0]:  position_embedding_type ......................... PositionEmbeddingType.absolute
[default0]:  pp_partition_method ............................. None
[default0]:  profile_backward ................................ False
[default0]:  query_in_block_prob ............................. 0.1
[default0]:  rampup_batch_size ............................... None
[default0]:  rank ............................................ 0
[default0]:  remote_device ................................... none
[default0]:  reset_attention_mask ............................ False
[default0]:  reset_position_ids .............................. False
[default0]:  retriever_report_topk_accuracies ................ []
[default0]:  retriever_score_scaling ......................... False
[default0]:  retriever_seq_length ............................ 256
[default0]:  reweight_loss_based_on_position_frequency ....... False
[default0]:  sample_rate ..................................... 1.0
[default0]:  save ............................................ /p/scratch/opengptx-elm/john2/opengpt/bigscience/output_dir/6416902/checkpoints
[default0]:  save_interval ................................... 300
[default0]:  scatter_gather_tensors_in_pipeline .............. True
[default0]:  scattered_embeddings ............................ False
[default0]:  seed ............................................ 42
[default0]:  seq_length ...................................... 2048
[default0]:  sgd_momentum .................................... 0.9
[default0]:  short_seq_prob .................................. 0.1
[default0]:  skip_train_iteration_range ...................... None
[default0]:  split ........................................... 949,50,1
[default0]:  split_transformers .............................. False
[default0]:  sync_tp_duplicated_parameters ................... False
[default0]:  synchronize_each_layer .......................... False
[default0]:  tensor_model_parallel_size ...................... 1
[default0]:  tensorboard_dir ................................. /p/scratch/opengptx-elm/john2/opengpt/bigscience/output_dir/6416902/tensorboard
[default0]:  tensorboard_log_interval ........................ 1
[default0]:  tensorboard_queue_size .......................... 5
[default0]:  test_weighted_split_names ....................... None
[default0]:  test_weighted_split_paths ....................... None
[default0]:  test_weighted_split_paths_path .................. None
[default0]:  test_weighted_split_splits ...................... None
[default0]:  test_weighted_split_weights ..................... None
[default0]:  tile_factor ..................................... 1
[default0]:  titles_data_path ................................ None
[default0]:  tokenizer_name_or_path .......................... None
[default0]:  tokenizer_type .................................. GPT2BPETokenizer
[default0]:  train_iters ..................................... None
[default0]:  train_samples ................................... 69335938
[default0]:  train_tokens .................................... 142000000000
[default0]:  train_weighted_split_paths ...................... None
[default0]:  train_weighted_split_paths_path ................. None
[default0]:  use_bnb_optimizer ............................... False
[default0]:  use_checkpoint_lr_scheduler ..................... False
[default0]:  use_contiguous_buffers_in_ddp ................... False
[default0]:  use_cpu_initialization .......................... None
[default0]:  use_one_sent_docs ............................... False
[default0]:  use_pin_memory .................................. False
[default0]:  valid_num_workers ............................... 2
[default0]:  valid_weighted_split_names ...................... None
[default0]:  valid_weighted_split_paths ...................... None
[default0]:  valid_weighted_split_paths_path ................. None
[default0]:  valid_weighted_split_splits ..................... None
[default0]:  valid_weighted_split_weights .................... None
[default0]:  virtual_pipeline_model_parallel_size ............ None
[default0]:  vocab_extra_ids ................................. 0
[default0]:  vocab_file ...................................... /p/scratch/opengptx-elm/data/tokenizers/opengpt_de/vocab.json
[default0]:  weight_decay .................................... 0.1
[default0]:  world_size ...................................... 16
[default0]:  zero_allgather_bucket_size ...................... 0.0
[default0]:  zero_contigious_gradients ....................... False
[default0]:  zero_reduce_bucket_size ......................... 0.0
[default0]:  zero_reduce_scatter ............................. False
[default0]:  zero_stage ...................................... 1
[default0]:-------------------- end of arguments ---------------------
[default0]:setting number of micro-batches to constant 128
[default0]:> building GPT2BPETokenizer tokenizer ...
[default0]: > padded vocab (size: 50257) with 47 dummy tokens (new size: 50304)
[default0]:DeepSpeed general environment info:
[default0]:torch install path ............... ['/p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch']
[default0]:torch version .................... 1.11
[default0]:torch cuda version ............... 11.5
[default0]:torch hip version ................ None
[default0]:nvcc version ..................... 11.5
[default0]:deepspeed install path ........... ['/p/project/opengptx-elm/john2/opengpt/bigscience/Megatron-DeepSpeed/DeepSpeed/deepspeed']
[default0]:deepspeed info ................... 0.7.1+0f5c201, 0f5c201, HEAD
[default0]:deepspeed wheel compiled w. ...... torch 1.12, cuda 11.6
[default0]:**** Git info for Megatron: git_hash=b88c61f git_branch=HEAD ****
[default0]:> initializing torch distributed ...
[default0]:[2022-12-08 16:33:11,280] [INFO] [comm.py:631:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
[default0]:> initializing tensor model parallel with size 1
[default0]:> initializing pipeline model parallel with size 2
[default0]:[W socket.cpp:558] [c10d] The client socket cannot be initialized to connect to [jwb0906i.juwels]:34765 (errno: 97 - Address family not supported by protocol).
[default3]:> setting tensorboard ...
[default3]:[W socket.cpp:558] [c10d] The client socket cannot be initialized to connect to [jwb0906i.juwels]:34765 (errno: 97 - Address family not supported by protocol).
[default3]:[W socket.cpp:558] [c10d] The client socket cannot be initialized to connect to [jwb0906i.juwels]:34765 (errno: 97 - Address family not supported by protocol).
[default0]:> setting random seeds to 42 ...
[default0]:[2022-12-08 16:33:12,588] [INFO] [checkpointing.py:226:model_parallel_cuda_manual_seed] > initializing model parallel cuda seeds on global rank 0, model parallel rank 0, and data parallel rank 0 with model parallel seed: 2760 and data parallel seed: 42
[default0]:> compiling dataset index builder ...
[default0]:make: Entering directory '/p/project/opengptx-elm/john2/opengpt/bigscience/Megatron-DeepSpeed/megatron/data'
[default0]:make: Nothing to be done for 'default'.
[default0]:make: Leaving directory '/p/project/opengptx-elm/john2/opengpt/bigscience/Megatron-DeepSpeed/megatron/data'
[default0]:>>> done with dataset index builder. Compilation time: 0.381 seconds
[default0]:> compiling and loading fused kernels ...
[default0]:Detected CUDA files, patching ldflags
[default0]:Emitting ninja build file /tmp/tmp6lj4nlk7/build.ninja...
[default0]:Building extension module scaled_upper_triang_masked_softmax_cuda...
[default0]:Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[default0]:[1/3] g++ -MMD -MF scaled_upper_triang_masked_softmax.o.d -DTORCH_EXTENSION_NAME=scaled_upper_triang_masked_softmax_cuda -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1016\" -isystem /p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/include -isystem /p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -isystem /p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/include/TH -isystem /p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/include/THC -isystem /p/software/juwelsbooster/stages/2022/software/CUDA/11.5/include -isystem /p/software/juwelsbooster/stages/2022/software/Python/3.9.6-GCCcore-11.2.0/include/python3.9 -D_GLIBCXX_USE_CXX11_ABI=1 -fPIC -std=c++14 -O3 -c /p/project/opengptx-elm/john2/opengpt/bigscience/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax.cpp -o scaled_upper_triang_masked_softmax.o 
[default0]:[2/3] /p/software/juwelsbooster/stages/2022/software/CUDA/11.5/bin/nvcc  -DTORCH_EXTENSION_NAME=scaled_upper_triang_masked_softmax_cuda -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1016\" -isystem /p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/include -isystem /p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -isystem /p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/include/TH -isystem /p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/include/THC -isystem /p/software/juwelsbooster/stages/2022/software/CUDA/11.5/include -isystem /p/software/juwelsbooster/stages/2022/software/Python/3.9.6-GCCcore-11.2.0/include/python3.9 -D_GLIBCXX_USE_CXX11_ABI=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_80,code=compute_80 -gencode=arch=compute_80,code=sm_80 --compiler-options '-fPIC' -O3 --use_fast_math -U__CUDA_NO_HALF_OPERATORS__ -U__CUDA_NO_HALF_CONVERSIONS__ --expt-relaxed-constexpr --expt-extended-lambda -std=c++14 -c /p/project/opengptx-elm/john2/opengpt/bigscience/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_cuda.cu -o scaled_upper_triang_masked_softmax_cuda.cuda.o 
[default0]:[3/3] g++ scaled_upper_triang_masked_softmax.o scaled_upper_triang_masked_softmax_cuda.cuda.o -shared -L/p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/lib -lc10 -lc10_cuda -ltorch_cpu -ltorch_cuda -ltorch -ltorch_python -L/p/software/juwelsbooster/stages/2022/software/CUDA/11.5/lib64 -lcudart -o scaled_upper_triang_masked_softmax_cuda.so
[default0]:Loading extension module scaled_upper_triang_masked_softmax_cuda...
[default0]:Detected CUDA files, patching ldflags
[default0]:Emitting ninja build file /tmp/tmp6lj4nlk7/build.ninja...
[default0]:Building extension module scaled_masked_softmax_cuda...
[default0]:Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[default0]:[1/3] g++ -MMD -MF scaled_masked_softmax.o.d -DTORCH_EXTENSION_NAME=scaled_masked_softmax_cuda -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1016\" -isystem /p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/include -isystem /p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -isystem /p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/include/TH -isystem /p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/include/THC -isystem /p/software/juwelsbooster/stages/2022/software/CUDA/11.5/include -isystem /p/software/juwelsbooster/stages/2022/software/Python/3.9.6-GCCcore-11.2.0/include/python3.9 -D_GLIBCXX_USE_CXX11_ABI=1 -fPIC -std=c++14 -O3 -c /p/project/opengptx-elm/john2/opengpt/bigscience/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax.cpp -o scaled_masked_softmax.o 
[default0]:[2/3] /p/software/juwelsbooster/stages/2022/software/CUDA/11.5/bin/nvcc  -DTORCH_EXTENSION_NAME=scaled_masked_softmax_cuda -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1016\" -isystem /p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/include -isystem /p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -isystem /p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/include/TH -isystem /p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/include/THC -isystem /p/software/juwelsbooster/stages/2022/software/CUDA/11.5/include -isystem /p/software/juwelsbooster/stages/2022/software/Python/3.9.6-GCCcore-11.2.0/include/python3.9 -D_GLIBCXX_USE_CXX11_ABI=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_80,code=compute_80 -gencode=arch=compute_80,code=sm_80 --compiler-options '-fPIC' -O3 --use_fast_math -U__CUDA_NO_HALF_OPERATORS__ -U__CUDA_NO_HALF_CONVERSIONS__ --expt-relaxed-constexpr --expt-extended-lambda -std=c++14 -c /p/project/opengptx-elm/john2/opengpt/bigscience/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_cuda.cu -o scaled_masked_softmax_cuda.cuda.o 
[default0]:[3/3] g++ scaled_masked_softmax.o scaled_masked_softmax_cuda.cuda.o -shared -L/p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/lib -lc10 -lc10_cuda -ltorch_cpu -ltorch_cuda -ltorch -ltorch_python -L/p/software/juwelsbooster/stages/2022/software/CUDA/11.5/lib64 -lcudart -o scaled_masked_softmax_cuda.so
[default0]:Loading extension module scaled_masked_softmax_cuda...
[default0]:Detected CUDA files, patching ldflags
[default0]:Emitting ninja build file /tmp/tmp6lj4nlk7/build.ninja...
[default0]:Building extension module fused_mix_prec_layer_norm_cuda...
[default0]:Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[default0]:[1/3] g++ -MMD -MF layer_norm_cuda.o.d -DTORCH_EXTENSION_NAME=fused_mix_prec_layer_norm_cuda -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1016\" -isystem /p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/include -isystem /p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -isystem /p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/include/TH -isystem /p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/include/THC -isystem /p/software/juwelsbooster/stages/2022/software/CUDA/11.5/include -isystem /p/software/juwelsbooster/stages/2022/software/Python/3.9.6-GCCcore-11.2.0/include/python3.9 -D_GLIBCXX_USE_CXX11_ABI=1 -fPIC -std=c++14 -O3 -c /p/project/opengptx-elm/john2/opengpt/bigscience/Megatron-DeepSpeed/megatron/fused_kernels/layer_norm_cuda.cpp -o layer_norm_cuda.o 
[default0]:[2/3] /p/software/juwelsbooster/stages/2022/software/CUDA/11.5/bin/nvcc  -DTORCH_EXTENSION_NAME=fused_mix_prec_layer_norm_cuda -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1016\" -isystem /p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/include -isystem /p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -isystem /p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/include/TH -isystem /p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/include/THC -isystem /p/software/juwelsbooster/stages/2022/software/CUDA/11.5/include -isystem /p/software/juwelsbooster/stages/2022/software/Python/3.9.6-GCCcore-11.2.0/include/python3.9 -D_GLIBCXX_USE_CXX11_ABI=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_80,code=compute_80 -gencode=arch=compute_80,code=sm_80 --compiler-options '-fPIC' -O3 --use_fast_math -maxrregcount=50 -std=c++14 -c /p/project/opengptx-elm/john2/opengpt/bigscience/Megatron-DeepSpeed/megatron/fused_kernels/layer_norm_cuda_kernel.cu -o layer_norm_cuda_kernel.cuda.o 
[default0]:[3/3] g++ layer_norm_cuda.o layer_norm_cuda_kernel.cuda.o -shared -L/p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/lib -lc10 -lc10_cuda -ltorch_cpu -ltorch_cuda -ltorch -ltorch_python -L/p/software/juwelsbooster/stages/2022/software/CUDA/11.5/lib64 -lcudart -o fused_mix_prec_layer_norm_cuda.so
[default0]:Loading extension module fused_mix_prec_layer_norm_cuda...
[default0]:NCCL version 2.12.7+cuda11.5
[default0]:>>> done with compiling and loading fused kernels. Compilation time: 370.434 seconds
[default0]:time to initialize megatron (seconds): 427.405
[default0]:[after megatron is initialized] datetime: 2022-12-08 16:39:23 
[default0]:building GPT model ...
[default0]:[2022-12-08 16:39:23,472] [INFO] [utils.py:827:see_memory_usage] Before Building Model
[default0]:[2022-12-08 16:39:23,472] [INFO] [utils.py:828:see_memory_usage] MA 0.0 GB         Max_MA 0.0 GB         CA 0.0 GB         Max_CA 0 GB 
[default0]:[2022-12-08 16:39:23,473] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory:  used = 36.63 GB, percent = 7.3%
[default0]:SEED_LAYERS=False BASE_SEED=1234 SEED_FN=None
[default0]:Using topology: {ProcessCoord(pipe=0, data=0, model=0): 0, ProcessCoord(pipe=0, data=1, model=0): 1, ProcessCoord(pipe=0, data=2, model=0): 2, ProcessCoord(pipe=0, data=3, model=0): 3, ProcessCoord(pipe=0, data=4, model=0): 4, ProcessCoord(pipe=0, data=5, model=0): 5, ProcessCoord(pipe=0, data=6, model=0): 6, ProcessCoord(pipe=0, data=7, model=0): 7, ProcessCoord(pipe=1, data=0, model=0): 8, ProcessCoord(pipe=1, data=1, model=0): 9, ProcessCoord(pipe=1, data=2, model=0): 10, ProcessCoord(pipe=1, data=3, model=0): 11, ProcessCoord(pipe=1, data=4, model=0): 12, ProcessCoord(pipe=1, data=5, model=0): 13, ProcessCoord(pipe=1, data=6, model=0): 14, ProcessCoord(pipe=1, data=7, model=0): 15}
[default0]:[2022-12-08 16:39:23,812] [INFO] [module.py:366:_partition_layers] Partitioning pipeline stages with method type:transformer
[default0]:stage=0 layers=19
[default0]:     0: _to_float16
[default0]:     1: EmbeddingPipe
[default0]:     2: <lambda>
[default0]:     3: ParallelTransformerLayerPipe
[default0]:     4: ParallelTransformerLayerPipe
[default0]:     5: ParallelTransformerLayerPipe
[default0]:     6: ParallelTransformerLayerPipe
[default0]:     7: ParallelTransformerLayerPipe
[default0]:     8: ParallelTransformerLayerPipe
[default0]:     9: ParallelTransformerLayerPipe
[default0]:    10: ParallelTransformerLayerPipe
[default0]:    11: ParallelTransformerLayerPipe
[default0]:    12: ParallelTransformerLayerPipe
[default0]:    13: ParallelTransformerLayerPipe
[default0]:    14: ParallelTransformerLayerPipe
[default0]:    15: ParallelTransformerLayerPipe
[default0]:    16: ParallelTransformerLayerPipe
[default0]:    17: ParallelTransformerLayerPipe
[default0]:    18: ParallelTransformerLayerPipe
[default0]:stage=1 layers=20
[default0]:    19: ParallelTransformerLayerPipe
[default0]:    20: ParallelTransformerLayerPipe
[default0]:    21: ParallelTransformerLayerPipe
[default0]:    22: ParallelTransformerLayerPipe
[default0]:    23: ParallelTransformerLayerPipe
[default0]:    24: ParallelTransformerLayerPipe
[default0]:    25: ParallelTransformerLayerPipe
[default0]:    26: ParallelTransformerLayerPipe
[default0]:    27: ParallelTransformerLayerPipe
[default0]:    28: ParallelTransformerLayerPipe
[default0]:    29: ParallelTransformerLayerPipe
[default0]:    30: ParallelTransformerLayerPipe
[default0]:    31: ParallelTransformerLayerPipe
[default0]:    32: ParallelTransformerLayerPipe
[default0]:    33: ParallelTransformerLayerPipe
[default0]:    34: ParallelTransformerLayerPipe
[default0]:    35: undo
[default0]:    36: MixedFusedLayerNorm
[default0]:    37: EmbeddingPipe
[default0]:    38: float16_to_fp32
[default0]:  loss: CrossEntropy
[default0]:[2022-12-08 16:39:24,420] [INFO] [utils.py:827:see_memory_usage] After Building Model
[default0]:[2022-12-08 16:39:24,421] [INFO] [utils.py:828:see_memory_usage] MA 6.42 GB         Max_MA 6.42 GB         CA 6.42 GB         Max_CA 6 GB 
[default0]:[2022-12-08 16:39:24,421] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory:  used = 36.75 GB, percent = 7.3%
[default0]:setting training iterations to 33855
[default0]:> learning rate decay style: cosine
[default0]:DeepSpeed is enabled.
[default0]:[2022-12-08 16:39:24,423] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed info: version=0.7.1+0f5c201, git-hash=0f5c201, git-branch=HEAD
[default0]:[2022-12-08 16:39:25,414] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed Flops Profiler Enabled: False
[default0]:[2022-12-08 16:39:25,415] [INFO] [logging.py:68:log_dist] [Rank 0] Removing param_group that has no 'params' in the client Optimizer
[default0]:[2022-12-08 16:39:25,415] [INFO] [logging.py:68:log_dist] [Rank 0] Using client Optimizer as basic optimizer
[default0]:[2022-12-08 16:39:25,421] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed Basic Optimizer = {basic_optimizer.__class__.__name__}
[default0]:[2022-12-08 16:39:25,421] [INFO] [utils.py:52:is_zero_supported_optimizer] Checking ZeRO support for optimizer=FusedAdam type=<class 'apex.optimizers.fused_adam.FusedAdam'>
[default0]:[2022-12-08 16:39:25,421] [INFO] [logging.py:68:log_dist] [Rank 0] Creating fp16 ZeRO stage 1 optimizer
[default0]:[2022-12-08 16:39:25,421] [INFO] [stage_1_and_2.py:134:__init__] Reduce bucket size 500000000
[default0]:[2022-12-08 16:39:25,421] [INFO] [stage_1_and_2.py:135:__init__] Allgather bucket size 500000000
[default0]:[2022-12-08 16:39:25,421] [INFO] [stage_1_and_2.py:136:__init__] CPU Offload: False
[default0]:[2022-12-08 16:39:25,421] [INFO] [stage_1_and_2.py:137:__init__] Round robin gradient partitioning: False
[default3]:Rank: 7 partition count [8, 8, 8] and sizes[(219742208, False), (209715200, False), (106496, False)] 
[default0]:Rank: 8 partition count [8, 8, 8] and sizes[(219742208, False), (209715200, False), (107520, False)] 
[default1]:Rank: 13 partition count [8, 8, 8] and sizes[(219742208, False), (209715200, False), (107520, False)] 
[default3]:Rank: 15 partition count [8, 8, 8] and sizes[(219742208, False), (209715200, False), (107520, False)] 
[default2]:Rank: 2 partition count [8, 8, 8] and sizes[(219742208, False), (209715200, False), (106496, False)] 
[default2]:Rank: 10 partition count [8, 8, 8] and sizes[(219742208, False), (209715200, False), (107520, False)] 
[default0]:Rank: 0 partition count [8, 8, 8] and sizes[(219742208, False), (209715200, False), (106496, False)] 
[default2]:Rank: 6 partition count [8, 8, 8] and sizes[(219742208, False), (209715200, False), (106496, False)] 
[default1]:Rank: 9 partition count [8, 8, 8] and sizes[(219742208, False), (209715200, False), (107520, False)] 
[default1]:Rank: 1 partition count [8, 8, 8] and sizes[(219742208, False), (209715200, False), (106496, False)] 
[default3]:Rank: 11 partition count [8, 8, 8] and sizes[(219742208, False), (209715200, False), (107520, False)] 
[default1]:Rank: 5 partition count [8, 8, 8] and sizes[(219742208, False), (209715200, False), (106496, False)] 
[default0]:Rank: 4 partition count [8, 8, 8] and sizes[(219742208, False), (209715200, False), (106496, False)] 
[default3]:Rank: 3 partition count [8, 8, 8] and sizes[(219742208, False), (209715200, False), (106496, False)] 
[default0]:Rank: 12 partition count [8, 8, 8] and sizes[(219742208, False), (209715200, False), (107520, False)] 
[default2]:Rank: 14 partition count [8, 8, 8] and sizes[(219742208, False), (209715200, False), (107520, False)]
[default0]:[2022-12-08 16:39:29,810] [INFO] [utils.py:827:see_memory_usage] After initializing ZeRO optimizer
[default0]:[2022-12-08 16:39:29,811] [INFO] [utils.py:828:see_memory_usage] MA 11.2 GB         Max_MA 11.2 GB         CA 19.65 GB         Max_CA 20 GB 
[default0]:[2022-12-08 16:39:29,811] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory:  used = 36.98 GB, percent = 7.3%
[default0]:[2022-12-08 16:39:29,811] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed Final Optimizer = FusedAdam
[default0]:[2022-12-08 16:39:29,811] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed using client LR scheduler
[default0]:[2022-12-08 16:39:29,811] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed LR Scheduler = <megatron.learning_rates.AnnealingLR object at 0x14df5c03fca0>
[default0]:[2022-12-08 16:39:29,811] [INFO] [logging.py:68:log_dist] [Rank 0] step=0, skipped=0, lr=[0.0, 0.0, 0.0], mom=[(0.9, 0.95), (0.9, 0.95), (0.9, 0.95)]
[default0]:[2022-12-08 16:39:29,812] [INFO] [config.py:975:print] DeepSpeedEngine configuration:
[default0]:[2022-12-08 16:39:29,812] [INFO] [config.py:979:print]   activation_checkpointing_config  {
[default0]:    "partition_activations": false, 
[default0]:    "contiguous_memory_optimization": false, 
[default0]:    "cpu_checkpointing": false, 
[default0]:    "number_checkpoints": null, 
[default0]:    "synchronize_checkpoint_boundary": false, 
[default0]:    "profile": false
[default0]:}
[default0]:[2022-12-08 16:39:29,812] [INFO] [config.py:979:print]   aio_config ................... {'block_size': 1048576, 'queue_depth': 8, 'thread_count': 1, 'single_submit': False, 'overlap_events': True}
[default0]:[2022-12-08 16:39:29,812] [INFO] [config.py:979:print]   amp_enabled .................. False
[default0]:[2022-12-08 16:39:29,812] [INFO] [config.py:979:print]   amp_params ................... False
[default0]:[2022-12-08 16:39:29,812] [INFO] [config.py:979:print]   autotuning_config ............ {
[default0]:    "enabled": false, 
[default0]:    "start_step": null, 
[default0]:    "end_step": null, 
[default0]:    "metric_path": null, 
[default0]:    "arg_mappings": null, 
[default0]:    "metric": "throughput", 
[default0]:    "model_info": null, 
[default0]:    "results_dir": null, 
[default0]:    "exps_dir": null, 
[default0]:    "overwrite": true, 
[default0]:    "fast": true, 
[default0]:    "start_profile_step": 3, 
[default0]:    "end_profile_step": 5, 
[default0]:    "tuner_type": "gridsearch", 
[default0]:    "tuner_early_stopping": 5, 
[default0]:    "tuner_num_trials": 50, 
[default0]:    "model_info_path": null, 
[default0]:    "mp_size": 1, 
[default0]:    "max_train_batch_size": null, 
[default0]:    "min_train_batch_size": 1, 
[default0]:    "max_train_micro_batch_size_per_gpu": 1.024000e+03, 
[default0]:    "min_train_micro_batch_size_per_gpu": 1, 
[default0]:    "num_tuning_micro_batch_sizes": 3
[default0]:}
[default0]:[2022-12-08 16:39:29,812] [INFO] [config.py:979:print]   bfloat16_enabled ............. False
[default0]:[2022-12-08 16:39:29,812] [INFO] [config.py:979:print]   checkpoint_tag_validation_enabled  True
[default0]:[2022-12-08 16:39:29,812] [INFO] [config.py:979:print]   checkpoint_tag_validation_fail  False
[default0]:[2022-12-08 16:39:29,812] [INFO] [config.py:979:print]   comms_config ................. <deepspeed.comm.config.DeepSpeedCommsConfig object at 0x14df5c03fa90>
[default0]:[2022-12-08 16:39:29,812] [INFO] [config.py:979:print]   communication_data_type ...... None
[default0]:[2022-12-08 16:39:29,812] [INFO] [config.py:979:print]   compression_config ........... {'weight_quantization': {'shared_parameters': {'enabled': False, 'quantizer_kernel': False, 'schedule_offset': 0, 'quantize_groups': 1, 'quantize_verbose': False, 'quantization_type': 'symmetric', 'quantize_weight_in_forward': False, 'rounding': 'nearest', 'fp16_mixed_quantize': False, 'quantize_change_ratio': 0.001}, 'different_groups': {}}, 'activation_quantization': {'shared_parameters': {'enabled': False, 'quantization_type': 'symmetric', 'range_calibration': 'dynamic', 'schedule_offset': 1000}, 'different_groups': {}}, 'sparse_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'row_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'head_pruning': {'shared_parameters': {'enabled': False, 'method': 'topk', 'schedule_offset': 1000}, 'different_groups': {}}, 'channel_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'layer_reduction': {'enabled': False}}
[default0]:[2022-12-08 16:39:29,812] [INFO] [config.py:979:print]   curriculum_enabled ........... False
[default0]:[2022-12-08 16:39:29,812] [INFO] [config.py:979:print]   curriculum_params ............ False
[default0]:[2022-12-08 16:39:29,812] [INFO] [config.py:979:print]   dataloader_drop_last ......... False
[default0]:[2022-12-08 16:39:29,812] [INFO] [config.py:979:print]   disable_allgather ............ False
[default0]:[2022-12-08 16:39:29,812] [INFO] [config.py:979:print]   dump_state ................... False
[default0]:[2022-12-08 16:39:29,812] [INFO] [config.py:979:print]   dynamic_loss_scale_args ...... {'init_scale': 4096, 'scale_window': 500, 'delayed_shift': 2, 'min_scale': 1}
[default0]:[2022-12-08 16:39:29,813] [INFO] [config.py:979:print]   eigenvalue_enabled ........... False
[default0]:[2022-12-08 16:39:29,813] [INFO] [config.py:979:print]   eigenvalue_gas_boundary_resolution  1
[default0]:[2022-12-08 16:39:29,813] [INFO] [config.py:979:print]   eigenvalue_layer_name ........ bert.encoder.layer
[default0]:[2022-12-08 16:39:29,813] [INFO] [config.py:979:print]   eigenvalue_layer_num ......... 0
[default0]:[2022-12-08 16:39:29,813] [INFO] [config.py:979:print]   eigenvalue_max_iter .......... 100
[default0]:[2022-12-08 16:39:29,813] [INFO] [config.py:979:print]   eigenvalue_stability ......... 1e-06
[default0]:[2022-12-08 16:39:29,813] [INFO] [config.py:979:print]   eigenvalue_tol ............... 0.01
[default0]:[2022-12-08 16:39:29,813] [INFO] [config.py:979:print]   eigenvalue_verbose ........... False
[default0]:[2022-12-08 16:39:29,813] [INFO] [config.py:979:print]   elasticity_enabled ........... False
[default0]:[2022-12-08 16:39:29,813] [INFO] [config.py:979:print]   flops_profiler_config ........ {
[default0]:    "enabled": false, 
[default0]:    "profile_step": 1, 
[default0]:    "module_depth": -1, 
[default0]:    "top_modules": 1, 
[default0]:    "detailed": true, 
[default0]:    "output_file": null
[default0]:}
[default0]:[2022-12-08 16:39:29,813] [INFO] [config.py:979:print]   fp16_auto_cast ............... False
[default0]:[2022-12-08 16:39:29,813] [INFO] [config.py:979:print]   fp16_enabled ................. True
[default0]:[2022-12-08 16:39:29,813] [INFO] [config.py:979:print]   fp16_master_weights_and_gradients  False
[default0]:[2022-12-08 16:39:29,813] [INFO] [config.py:979:print]   global_rank .................. 0
[default0]:[2022-12-08 16:39:29,813] [INFO] [config.py:979:print]   gradient_accumulation_steps .. 128
[default0]:[2022-12-08 16:39:29,813] [INFO] [config.py:979:print]   gradient_clipping ............ 1.0
[default0]:[2022-12-08 16:39:29,813] [INFO] [config.py:979:print]   gradient_predivide_factor .... 1.0
[default0]:[2022-12-08 16:39:29,813] [INFO] [config.py:979:print]   initial_dynamic_scale ........ 4096
[default0]:[2022-12-08 16:39:29,813] [INFO] [config.py:979:print]   load_universal_checkpoint .... False
[default0]:[2022-12-08 16:39:29,813] [INFO] [config.py:979:print]   loss_scale ................... 0
[default0]:[2022-12-08 16:39:29,813] [INFO] [config.py:979:print]   memory_breakdown ............. False
[default0]:[2022-12-08 16:39:29,813] [INFO] [config.py:979:print]   monitor_config ............... <deepspeed.monitor.config.DeepSpeedMonitorConfig object at 0x14df5c03f9d0>
[default0]:[2022-12-08 16:39:29,813] [INFO] [config.py:979:print]   nebula_config ................ {
[default0]:    "enabled": false, 
[default0]:    "persistent_storage_path": null, 
[default0]:    "persistent_time_interval": 100, 
[default0]:    "num_of_version_in_retention": 2, 
[default0]:    "enable_nebula_load": true, 
[default0]:    "load_path": null
[default0]:}
[default0]:[2022-12-08 16:39:29,813] [INFO] [config.py:979:print]   optimizer_legacy_fusion ...... False
[default0]:[2022-12-08 16:39:29,813] [INFO] [config.py:979:print]   optimizer_name ............... None
[default0]:[2022-12-08 16:39:29,813] [INFO] [config.py:979:print]   optimizer_params ............. None
[default0]:[2022-12-08 16:39:29,813] [INFO] [config.py:979:print]   pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0}
[default0]:[2022-12-08 16:39:29,813] [INFO] [config.py:979:print]   pld_enabled .................. False
[default0]:[2022-12-08 16:39:29,813] [INFO] [config.py:979:print]   pld_params ................... False
[default0]:[2022-12-08 16:39:29,813] [INFO] [config.py:979:print]   prescale_gradients ........... False
[default0]:[2022-12-08 16:39:29,813] [INFO] [config.py:979:print]   scheduler_name ............... None
[default0]:[2022-12-08 16:39:29,813] [INFO] [config.py:979:print]   scheduler_params ............. None
[default0]:[2022-12-08 16:39:29,813] [INFO] [config.py:979:print]   sparse_attention ............. None
[default0]:[2022-12-08 16:39:29,813] [INFO] [config.py:979:print]   sparse_gradients_enabled ..... False
[default0]:[2022-12-08 16:39:29,813] [INFO] [config.py:979:print]   steps_per_print .............. 2000
[default0]:[2022-12-08 16:39:29,813] [INFO] [config.py:979:print]   train_batch_size ............. 2048
[default0]:[2022-12-08 16:39:29,813] [INFO] [config.py:979:print]   train_micro_batch_size_per_gpu  2
[default0]:[2022-12-08 16:39:29,813] [INFO] [config.py:979:print]   wall_clock_breakdown ......... False
[default0]:[2022-12-08 16:39:29,813] [INFO] [config.py:979:print]   world_size ................... 8
[default0]:[2022-12-08 16:39:29,814] [INFO] [config.py:979:print]   zero_allow_untested_optimizer  False
[default0]:[2022-12-08 16:39:29,814] [INFO] [config.py:979:print]   zero_config .................. stage=1 contiguous_gradients=True reduce_scatter=True reduce_bucket_size=500000000 allgather_partitions=True allgather_bucket_size=500000000 overlap_comm=False load_from_fp32_weights=True elastic_checkpoint=False offload_param=None offload_optimizer=None sub_group_size=1000000000 cpu_offload_param=None cpu_offload_use_pin_memory=None cpu_offload=None prefetch_bucket_size=50000000 param_persistence_threshold=100000 model_persistence_threshold=9223372036854775807 max_live_parameters=1000000000 max_reuse_distance=1000000000 gather_16bit_weights_on_model_save=False stage3_gather_fp16_weights_on_model_save=False ignore_unused_parameters=True legacy_stage1=False round_robin_gradients=False
[default0]:[2022-12-08 16:39:29,814] [INFO] [config.py:979:print]   zero_enabled ................. True
[default0]:[2022-12-08 16:39:29,814] [INFO] [config.py:979:print]   zero_optimization_stage ...... 1
[default0]:[2022-12-08 16:39:29,814] [INFO] [config.py:981:print]   json = {
[default0]:    "train_micro_batch_size_per_gpu": 2, 
[default0]:    "train_batch_size": 2.048000e+03, 
[default0]:    "gradient_clipping": 1.0, 
[default0]:    "zero_optimization": {
[default0]:        "stage": 1
[default0]:    }, 
[default0]:    "fp16": {
[default0]:        "enabled": true, 
[default0]:        "loss_scale": 0, 
[default0]:        "loss_scale_window": 500, 
[default0]:        "hysteresis": 2, 
[default0]:        "min_loss_scale": 1, 
[default0]:        "initial_scale_power": 12
[default0]:    }, 
[default0]:    "steps_per_print": 2.000000e+03, 
[default0]:    "wall_clock_breakdown": false, 
[default0]:    "compression_training": {
[default0]:        "weight_quantization": {
[default0]:            "shared_parameters": {
[default0]:            }, 
[default0]:            "different_groups": {
[default0]:            }
[default0]:        }, 
[default0]:        "activation_quantization": {
[default0]:            "shared_parameters": {
[default0]:            }, 
[default0]:            "different_groups": {
[default0]:            }
[default0]:        }, 
[default0]:        "sparse_pruning": {
[default0]:            "shared_parameters": {
[default0]:            }, 
[default0]:            "different_groups": {
[default0]:            }
[default0]:        }, 
[default0]:        "row_pruning": {
[default0]:            "shared_parameters": {
[default0]:            }, 
[default0]:            "different_groups": {
[default0]:            }
[default0]:        }, 
[default0]:        "head_pruning": {
[default0]:            "shared_parameters": {
[default0]:            }, 
[default0]:            "different_groups": {
[default0]:            }
[default0]:        }, 
[default0]:        "channel_pruning": {
[default0]:            "shared_parameters": {
[default0]:            }, 
[default0]:            "different_groups": {
[default0]:            }
[default0]:        }
[default0]:    }
[default0]:}
[default0]:[2022-12-08 16:39:29,814] [INFO] [engine.py:87:__init__] CONFIG: micro_batches=128 micro_batch_size=2
[default0]:estimated model parameters: 6.873022464
[default0]:estimated model parameters without embeddings: 6.44415488
[default0]:[after model, optimizer, and learning rate scheduler are built] datetime: 2022-12-08 16:39:30 
[default0]:> building train, validation, and test datasets ...
[default0]: > datasets target sizes (minimum size):
[default0]:    train:      69335938
[default0]:    validation: 23142400
[default0]:    test:       204800
[default0]:> building train, validation, and test datasets for GPT ...
[default0]: > building dataset index ...
[default0]:/p/project/opengptx-elm/john2/opengpt/bigscience/Megatron-DeepSpeed/megatron/utils.py:349: UserWarning: Parameter count with the embeddings will be inaccurate with PP > 1, as the first and last stage hold several copies of the embeddings
[default0]:  warnings.warn("Parameter count with the embeddings will be inaccurate with PP > 1, as the first and last stage hold several copies of the embeddings")
[default0]:    reading sizes...
[default0]:    reading pointers...
[default0]:    reading document index...
[default0]:    creating numpy buffer of mmap...
[default0]:    creating memory view of numpy buffer...
[default0]: > finished creating indexed dataset in 0.081394 seconds
[default0]:    number of documents: 401170763
[default0]: > dataset split:
[default0]:    train:
[default0]:     document indices in [0, 380711054) total of 380711054 documents
[default0]:    validation:
[default0]:     document indices in [380711054, 400769592) total of 20058538 documents
[default0]:    test:
[default0]:     document indices in [400769592, 401170763) total of 401171 documents
[default0]: > loading sample-idx mapping from /p/scratch/opengptx-elm/data/preprocessed_datasets/20221110_german_only/merged_german_only_train_indexmap_69335938ns_2048sl_42s_sample_idx.npy
[default0]: > loading shuffle-idx mapping from /p/scratch/opengptx-elm/data/preprocessed_datasets/20221110_german_only/merged_german_only_train_indexmap_69335938ns_2048sl_42s_shuffle_idx.npy
[default0]:    loaded indexed file in 0.192 seconds
[default0]:    total number of samples: 92537371
[default0]:    total number of epochs: 2
[default0]: > loading doc-idx mapping from /p/scratch/opengptx-elm/data/preprocessed_datasets/20221110_german_only/merged_german_only_valid_indexmap_23142400ns_2048sl_42s_doc_idx.npy
[default0]: > loading sample-idx mapping from /p/scratch/opengptx-elm/data/preprocessed_datasets/20221110_german_only/merged_german_only_valid_indexmap_23142400ns_2048sl_42s_sample_idx.npy
[default0]: > loading shuffle-idx mapping from /p/scratch/opengptx-elm/data/preprocessed_datasets/20221110_german_only/merged_german_only_valid_indexmap_23142400ns_2048sl_42s_shuffle_idx.npy
[default0]:    loaded indexed file in 0.192 seconds
[default0]:    total number of samples: 23983517
[default0]:    total number of epochs: 10
[default0]: > WARNING: could not find index map files, building the indices on rank 0 ...
[default0]: > last epoch number of samples (5124) is smaller than 95.0% of number of samples per epoch (16639), setting separate_last_epoch to True
[default0]: > elasped time to build and save doc-idx mapping (seconds): 0.275444
[default0]:    using:
[default0]:     number of documents:       401171
[default0]:     number of epochs:          13
[default0]:     sequence length:           2048
[default0]:     total number of samples:   216315
[default0]: > elasped time to build and save sample-idx mapping (seconds): 0.034729
[default0]: > building shuffle index with split [0, 199676) and [199676, 216315) ...
[default0]: > elasped time to build and save shuffle-idx mapping (seconds): 0.007354
[default0]: > loading doc-idx mapping from /p/scratch/opengptx-elm/data/preprocessed_datasets/20221110_german_only/merged_german_only_test_indexmap_204800ns_2048sl_42s_doc_idx.npy
[default0]: > loading sample-idx mapping from /p/scratch/opengptx-elm/data/preprocessed_datasets/20221110_german_only/merged_german_only_test_indexmap_204800ns_2048sl_42s_sample_idx.npy
[default0]: > loading shuffle-idx mapping from /p/scratch/opengptx-elm/data/preprocessed_datasets/20221110_german_only/merged_german_only_test_indexmap_204800ns_2048sl_42s_shuffle_idx.npy
[default0]:    loaded indexed file in 0.011 seconds
[default0]:    total number of samples: 216316
[default0]:    total number of epochs: 13
[default0]:> finished creating GPT datasets ...
[default3]:time (ms) | model-and-optimizer-setup: 6958.78 | train/valid/test-data-iterators-setup: 59458.82
[default0]:[after dataloaders are built] datetime: 2022-12-08 16:40:30 
[default0]:done with setup ...
[default0]:training ...
[default0]:Number of parameters: [tensor rank - pipeline rank] w/ and w/o embeddings:
[default0]:[000-000] 6.8730B / 6.4442B
[default0]:[000-001] 6.8730B / 6.4442B
[default0]:[before the start of training step] datetime: 2022-12-08 16:40:30 
[default0]:[2022-12-08 16:40:30,400] [INFO] [checkpointing.py:547:forward] Activation Checkpointing Information
[default0]:[2022-12-08 16:40:30,400] [INFO] [checkpointing.py:548:forward] ----Partition Activations False, CPU CHECKPOINTING False
[default0]:[2022-12-08 16:40:30,400] [INFO] [checkpointing.py:551:forward] ----contiguous Memory Checkpointing False with 32 total layers
[default0]:[2022-12-08 16:40:30,400] [INFO] [checkpointing.py:554:forward] ----Synchronization False
[default0]:[2022-12-08 16:40:30,400] [INFO] [checkpointing.py:555:forward] ----Profiling time in checkpointing False
[default0]:/p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/_tensor.py:1104: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at  /dev/shm/strube1/juwelsbooster/PyTorch/1.11/gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/pytorch/build/aten/src/ATen/core/TensorBody.h:470.)
[default0]:  return self._grad
[default1]:/p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/_tensor.py:1104: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at  /dev/shm/strube1/juwelsbooster/PyTorch/1.11/gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/pytorch/build/aten/src/ATen/core/TensorBody.h:470.)
[default1]:  return self._grad
[default3]:/p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/_tensor.py:1104: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at  /dev/shm/strube1/juwelsbooster/PyTorch/1.11/gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/pytorch/build/aten/src/ATen/core/TensorBody.h:470.)
[default3]:  return self._grad
[default2]:/p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/_tensor.py:1104: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at  /dev/shm/strube1/juwelsbooster/PyTorch/1.11/gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/pytorch/build/aten/src/ATen/core/TensorBody.h:470.)
[default2]:  return self._grad
[default1]:/p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/_tensor.py:1104: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at  /dev/shm/strube1/juwelsbooster/PyTorch/1.11/gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/pytorch/build/aten/src/ATen/core/TensorBody.h:470.)
[default1]:  return self._grad
[default2]:/p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/_tensor.py:1104: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at  /dev/shm/strube1/juwelsbooster/PyTorch/1.11/gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/pytorch/build/aten/src/ATen/core/TensorBody.h:470.)
[default2]:  return self._grad
[default3]:/p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/_tensor.py:1104: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at  /dev/shm/strube1/juwelsbooster/PyTorch/1.11/gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/pytorch/build/aten/src/ATen/core/TensorBody.h:470.)
[default3]:  return self._grad
[default0]:/p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/_tensor.py:1104: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at  /dev/shm/strube1/juwelsbooster/PyTorch/1.11/gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/pytorch/build/aten/src/ATen/core/TensorBody.h:470.)
[default0]:  return self._grad
[default3]: iteration        1/   33855 | consumed samples:         2048 | consumed tokens:      4194304 | elapsed time per iteration (s): 108.55 | learning rate: 1.342E-06 | global batch size:  2048 | lm loss: 1.166878E+01 | loss scale: 4096.0 | grad norm: 48.698 | num zeros: 0.0 | number of skipped iterations:   0 | number of nan iterations:   0 | samples per second: 18.867 | TFLOPs: 137.82 |
[default0]:[Rank 0] (after 1 iterations) memory (MB) | allocated: 11515.90966796875 | max allocated: 21286.72119140625 | reserved: 24154.0 | max reserved: 24154.0
[default0]:[Rank 8] (after 1 iterations) memory (MB) | allocated: 13203.89013671875 | max allocated: 22350.59375 | reserved: 25768.0 | max reserved: 25768.0
[default3]: iteration        2/   33855 | consumed samples:         4096 | consumed tokens:      8388608 | elapsed time per iteration (s): 127.22 | learning rate: 2.684E-06 | global batch size:  2048 | lm loss: 1.166918E+01 | loss scale: 4096.0 | grad norm: 48.571 | num zeros: 0.0 | number of skipped iterations:   0 | number of nan iterations:   0 | samples per second: 16.098 | TFLOPs: 117.59 |
[default3]: iteration        3/   33855 | consumed samples:         6144 | consumed tokens:     12582912 | elapsed time per iteration (s): 97.12 | learning rate: 4.027E-06 | global batch size:  2048 | lm loss: 1.052248E+01 | loss scale: 4096.0 | grad norm: 81.478 | num zeros: 0.0 | number of skipped iterations:   0 | number of nan iterations:   0 | samples per second: 21.087 | TFLOPs: 154.04 |
[default3]: iteration        4/   33855 | consumed samples:         8192 | consumed tokens:     16777216 | elapsed time per iteration (s): 97.49 | learning rate: 5.369E-06 | global batch size:  2048 | lm loss: 1.400736E+01 | loss scale: 4096.0 | grad norm: 465.311 | num zeros: 0.0 | number of skipped iterations:   0 | number of nan iterations:   0 | samples per second: 21.008 | TFLOPs: 153.47 |
[default3]: iteration        5/   33855 | consumed samples:        10240 | consumed tokens:     20971520 | elapsed time per iteration (s): 104.16 | learning rate: 6.711E-06 | global batch size:  2048 | lm loss: 1.291126E+01 | loss scale: 4096.0 | grad norm: 188.286 | num zeros: 0.0 | number of skipped iterations:   0 | number of nan iterations:   0 | samples per second: 19.662 | TFLOPs: 143.63 |
[default3]: iteration        6/   33855 | consumed samples:        12288 | consumed tokens:     25165824 | elapsed time per iteration (s): 96.32 | learning rate: 8.053E-06 | global batch size:  2048 | lm loss: 1.242363E+01 | loss scale: 4096.0 | grad norm: 132.996 | num zeros: 0.0 | number of skipped iterations:   0 | number of nan iterations:   0 | samples per second: 21.263 | TFLOPs: 155.33 |

It would be much appreciated if anyone could explain why this warning occurs or suggest how to debug it. Please also let me know if you need any clarification regarding the issue. Thank you!

janEbert commented 1 year ago

Now with a stack trace:

[default0]:Traceback (most recent call last):
[default0]:  File "/Megatron-DeepSpeed/pretrain_gpt.py", line 244, in <module>
[default0]:    main()
[default0]:  File "/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 345, in wrapper
[default0]:    return f(*args, **kwargs)
[default0]:  File "/Megatron-DeepSpeed/pretrain_gpt.py", line 234, in main
[default0]:    pretrain(
[default0]:  File "/Megatron-DeepSpeed/megatron/training.py", line 195, in pretrain
[default0]:    iteration = train(forward_step_func,
[default0]:  File "/Megatron-DeepSpeed/megatron/training.py", line 874, in train
[default0]:    train_step(forward_step_func,
[default0]:  File "/Megatron-DeepSpeed/megatron/training.py", line 449, in train_step
[default0]:    loss = model[0].train_batch(data_iter=data_iterator)
[default0]:  File "/Megatron-DeepSpeed/DeepSpeed/deepspeed/runtime/pipe/engine.py", line 345, in train_batch
[default0]:    self._exec_schedule(sched)
[default0]:  File "/Megatron-DeepSpeed/DeepSpeed/deepspeed/runtime/pipe/engine.py", line 1375, in _exec_schedule
[default0]:    self._exec_instr(**cmd.kwargs)
[default0]:  File "/Megatron-DeepSpeed/DeepSpeed/deepspeed/runtime/pipe/engine.py", line 655, in _exec_forward_pass
[default0]:    self._zero_grads(inputs)
[default0]:  File "/Megatron-DeepSpeed/DeepSpeed/deepspeed/runtime/pipe/engine.py", line 1198, in _zero_grads
[default0]:    if inputs.grad is not None:
[default0]:  File "/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/_tensor.py", line 1104, in grad
[default0]:    return self._grad
[default0]:UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at  /PyTorch/1.11/gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/pytorch/build/aten/src/ATen/core/TensorBody.h:470.)
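The trace shows the warning fires when `_zero_grads` in DeepSpeed's pipeline engine reads `inputs.grad` on a tensor that is not a leaf. For reference, reading `.grad` on any non-leaf tensor reproduces the exact same warning; here is a minimal standalone sketch of my own (not code from this repo or DeepSpeed):

```python
import torch

# Reading .grad on a non-leaf tensor triggers the same UserWarning
# reported in this issue.
x = torch.ones(3, requires_grad=True)   # leaf tensor
y = x * 2                               # non-leaf tensor (result of an op on x)
y.sum().backward()

print(x.grad)   # populated for the leaf tensor, no warning
print(y.grad)   # None, and PyTorch emits the "non-leaf Tensor" UserWarning
```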
janEbert commented 1 year ago

It looks like the original Megatron-LM code base has the same problem: https://github.com/NVIDIA/Megatron-LM/blob/52e636888cccc41e931251c417a7181fc36de926/megatron/optimizer/distrib_optimizer.py#L485-L486
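Since the same `.grad` access exists upstream, a possible workaround on the user side (my own assumption, not what either code base actually does) is to guard the access on `Tensor.is_leaf`, or simply filter this specific warning:

```python
import warnings
import torch

# Option 1 (assumed workaround, not the DeepSpeed/Megatron-LM implementation):
# only touch .grad on leaf tensors, which is the only case where autograd
# populates it without an explicit retain_grad().
def zero_input_grads(inputs):
    tensors = inputs if isinstance(inputs, (tuple, list)) else (inputs,)
    for t in tensors:
        if torch.is_tensor(t) and t.is_leaf and t.grad is not None:
            t.grad.detach_()
            t.grad.zero_()

# Option 2: silence just this warning process-wide.
warnings.filterwarnings(
    "ignore",
    message=r"The \.grad attribute of a Tensor that is not a leaf Tensor",
    category=UserWarning,
)
```

Neither option should change behaviour for the non-leaf case (its `.grad` is `None` unless `retain_grad()` was called explicitly); it only avoids the noisy warning.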