TonyTangYu opened this issue 4 years ago
Hi @TonyTangYu,
Have you also changed the script, run_squad_baseline.sh? Can you also send me the log of your run so that we can make sure all the parameters are set properly? By the way, which checkpoint are you using here? Thanks.
Best regards, Reza
@RezaYazdaniAminabadi , Thanks for your quick response.
Yes, I have also changed the script, run_squad_baseline.sh. I deleted the argument --model_file because I don't have a DeepSpeed BERT pretraining checkpoint, so I don't use any checkpoint here. Here is my run_squad_baseline.sh. When I run run_squad_baseline.sh, there is no such warning and I can get the training results.
```bash
NGPU_PER_NODE=2
MODEL_FILE=my_path_to_modelfile
SQUAD_DIR=my_path_to_squad
OUTPUT_DIR=my_path_to_output

NUM_NODES=1
NGPU=$((NGPU_PER_NODE*NUM_NODES))
EFFECTIVE_BATCH_SIZE=24
MAX_GPU_BATCH_SIZE=3
PER_GPU_BATCH_SIZE=$((EFFECTIVE_BATCH_SIZE/NGPU))
if [[ $PER_GPU_BATCH_SIZE -lt $MAX_GPU_BATCH_SIZE ]]; then
    GRAD_ACCUM_STEPS=1
else
    GRAD_ACCUM_STEPS=$((PER_GPU_BATCH_SIZE/MAX_GPU_BATCH_SIZE))
fi
LR=3e-5
MASTER_PORT=$((NGPU+12345))
JOB_NAME="baseline${NGPU}GPUs_${EFFECTIVE_BATCH_SIZE}batch_size"

run_cmd="deepspeed --num_nodes ${NUM_NODES} --num_gpus ${NGPU_PER_NODE} \
  nvidia_run_squad_baseline.py \
  --bert_model bert-large-uncased \
  --do_train \
  --do_lower_case \
  --do_predict \
  --train_file $SQUAD_DIR/train-v1.1.json \
  --predict_file $SQUAD_DIR/dev-v1.1.json \
  --train_batch_size $PER_GPU_BATCH_SIZE \
  --learning_rate ${LR} \
  --num_train_epochs 2.0 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir $OUTPUT_DIR \
  --job_name ${JOB_NAME} \
  --gradient_accumulation_steps ${GRAD_ACCUM_STEPS} \
  --fp16"
echo ${run_cmd}
eval ${run_cmd}
```
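(For the values above, the arithmetic works out to PER_GPU_BATCH_SIZE = 24 / 2 = 12, and since 12 is not below MAX_GPU_BATCH_SIZE = 3, GRAD_ACCUM_STEPS = 12 / 3 = 4, i.e. each GPU runs micro-batches of 3 with 4 accumulation steps to reach the effective batch size of 24.)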
The deepspeed log is as follows:
deepspeed --num_nodes 1 --num_gpus 2 --master_port=29500 --hostfile /dev/null nvidia_run_squad_deepspeed.py --bert_model bert-large-uncased --do_train --do_lower_case --predict_batch_size 3 --do_predict --train_file /home/tangyu/hdd/dataset/SQuAD/dataset/train-v1.1.json --predict_file /home/tangyu/hdd/dataset/SQuAD/dataset/dev-v1.1.json --train_batch_size 6 --learning_rate 0.00003 --num_train_epochs 10.0 --max_seq_length 384 --doc_stride 128 --output_dir /home/tangyu/Desktop/project/arm-beauty/DeepSpeed/DeepSpeedExamples/BingBertSquad/output/deepspeed_bsz12 --job_name deepspeed_2GPUs_12batch_size --gradient_accumulation_steps 2 --fp16 --deepspeed --deepspeed_config deepspeed_bsz24_config.json --deepspeed_transformer_kernel --dropout 0.1 --seed 12345 --preln [2020-08-21 10:07:23,922] [WARNING] [deepspeed_run.py:90:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only. [2020-08-21 10:07:23,944] [INFO] [deepspeed_run.py:333:main] cmd=['/home/tangyu/anaconda3/envs/deepspeed/bin/python', '-u', '-m', 'deepspeed.pt.deepspeed_launch', '--world_info=eyJsb2NhbGhvc3QiOiBbMCwgMV19', '--master_addr=127.0.0.1', '--master_port=29500', 'nvidia_run_squad_deepspeed.py', '--bert_model', 'bert-large-uncased', '--do_train', '--do_lower_case', '--predict_batch_size', '3', '--do_predict', '--train_file', '/home/tangyu/hdd/dataset/SQuAD/dataset/train-v1.1.json', '--predict_file', '/home/tangyu/hdd/dataset/SQuAD/dataset/dev-v1.1.json', '--train_batch_size', '6', '--learning_rate', '0.00003', '--num_train_epochs', '10.0', '--max_seq_length', '384', '--doc_stride', '128', '--output_dir', '/home/tangyu/Desktop/project/arm-beauty/DeepSpeed/DeepSpeedExamples/BingBertSquad/output/deepspeed_bsz12', '--job_name', 'deepspeed_2GPUs_12batch_size', '--gradient_accumulation_steps', '2', '--fp16', '--deepspeed', '--deepspeed_config', 'deepspeed_bsz24_config.json', '--deepspeed_transformer_kernel', '--dropout', '0.1', '--seed', '12345', '--preln'] [2020-08-21 10:07:24,370] [INFO] [deepspeed_launch.py:71:main] WORLD INFO DICT: {'localhost': [0, 1]} [2020-08-21 10:07:24,370] [INFO] [deepspeed_launch.py:80:main] nnodes=1, num_local_procs=2, node_rank=0 [2020-08-21 10:07:24,370] [INFO] [deepspeed_launch.py:92:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0, 1]}) [2020-08-21 10:07:24,370] [INFO] [deepspeed_launch.py:93:main] dist_world_size=2 [2020-08-21 10:07:24,370] [INFO] [deepspeed_launch.py:96:main] Setting CUDA_VISIBLE_DEVICES=0,1 08/21/2020 10:07:25 - INFO - main - device: cuda:1 n_gpu: 1, distributed training: True, 16-bits training: True 08/21/2020 10:07:25 - INFO - main - device: cuda:0 n_gpu: 1, distributed training: True, 16-bits training: True 08/21/2020 10:07:27 - INFO - pytorch_pretrained_bert.tokenization - loading vocabulary file https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-vocab.txt from cache at /home/tangyu/.pytorch_pretrained_bert/9b3c03a36e83b13d5ba95ac965c9f9074a99e14340c523ab405703179e79fc46.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084 08/21/2020 10:07:27 - INFO - pytorch_pretrained_bert.tokenization - loading vocabulary file https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-vocab.txt from cache at /home/tangyu/.pytorch_pretrained_bert/9b3c03a36e83b13d5ba95ac965c9f9074a99e14340c523ab405703179e79fc46.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084 [2020-08-21 10:07:32,677] [INFO] [deepspeed_config.py:411:_set_batch_related_parameters] After Train batch 12 micro_batch 
3 and grad_acc 2 DeepSpeed Transformer config is {'layer_id': 0, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} [2020-08-21 10:07:32,873] [INFO] [deepspeed_config.py:411:_set_batch_related_parameters] After Train batch 12 micro_batch 3 and grad_acc 2 DeepSpeed Transformer config is {'layer_id': 0, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} layer #0 is created with date type [half]. DeepSpeed Transformer config is {'layer_id': 1, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} layer #1 is created with date type [half]. DeepSpeed Transformer config is {'layer_id': 2, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} layer #2 is created with date type [half]. DeepSpeed Transformer config is {'layer_id': 3, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} layer #3 is created with date type [half]. DeepSpeed Transformer config is {'layer_id': 4, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} layer #0 is created with date type [half]. 
DeepSpeed Transformer config is {'layer_id': 1, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} layer #4 is created with date type [half]. DeepSpeed Transformer config is {'layer_id': 5, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} layer #1 is created with date type [half]. DeepSpeed Transformer config is {'layer_id': 2, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} layer #5 is created with date type [half]. DeepSpeed Transformer config is {'layer_id': 6, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} layer #6 is created with date type [half]. DeepSpeed Transformer config is {'layer_id': 7, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} layer #2 is created with date type [half]. DeepSpeed Transformer config is {'layer_id': 3, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} layer #7 is created with date type [half]. 
DeepSpeed Transformer config is {'layer_id': 8, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} layer #3 is created with date type [half]. DeepSpeed Transformer config is {'layer_id': 4, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} layer #8 is created with date type [half]. DeepSpeed Transformer config is {'layer_id': 9, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} layer #4 is created with date type [half]. DeepSpeed Transformer config is {'layer_id': 5, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} layer #5 is created with date type [half]. layer #9 is created with date type [half]. DeepSpeed Transformer config is {'layer_id': 10, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} DeepSpeed Transformer config is {'layer_id': 6, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} layer #10 is created with date type [half]. 
DeepSpeed Transformer config is {'layer_id': 11, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} layer #6 is created with date type [half]. DeepSpeed Transformer config is {'layer_id': 7, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} layer #11 is created with date type [half]. DeepSpeed Transformer config is {'layer_id': 12, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} layer #7 is created with date type [half]. DeepSpeed Transformer config is {'layer_id': 8, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} layer #12 is created with date type [half]. DeepSpeed Transformer config is {'layer_id': 13, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} layer #8 is created with date type [half]. DeepSpeed Transformer config is {'layer_id': 9, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} layer #13 is created with date type [half]. 
DeepSpeed Transformer config is {'layer_id': 14, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} layer #9 is created with date type [half]. DeepSpeed Transformer config is {'layer_id': 10, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} layer #14 is created with date type [half]. DeepSpeed Transformer config is {'layer_id': 15, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} layer #10 is created with date type [half]. DeepSpeed Transformer config is {'layer_id': 11, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} layer #15 is created with date type [half]. DeepSpeed Transformer config is {'layer_id': 16, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} layer #11 is created with date type [half]. DeepSpeed Transformer config is {'layer_id': 12, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} layer #16 is created with date type [half]. 
DeepSpeed Transformer config is {'layer_id': 17, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} layer #12 is created with date type [half]. DeepSpeed Transformer config is {'layer_id': 13, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} layer #17 is created with date type [half]. DeepSpeed Transformer config is {'layer_id': 18, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} layer #13 is created with date type [half]. DeepSpeed Transformer config is {'layer_id': 14, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} layer #18 is created with date type [half]. DeepSpeed Transformer config is {'layer_id': 19, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} layer #14 is created with date type [half]. DeepSpeed Transformer config is {'layer_id': 15, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} layer #19 is created with date type [half]. 
DeepSpeed Transformer config is {'layer_id': 20, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} layer #15 is created with date type [half]. DeepSpeed Transformer config is {'layer_id': 16, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} layer #20 is created with date type [half]. DeepSpeed Transformer config is {'layer_id': 21, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} layer #16 is created with date type [half]. DeepSpeed Transformer config is {'layer_id': 17, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} layer #21 is created with date type [half]. DeepSpeed Transformer config is {'layer_id': 22, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} layer #17 is created with date type [half]. DeepSpeed Transformer config is {'layer_id': 18, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} layer #22 is created with date type [half]. 
DeepSpeed Transformer config is {'layer_id': 23, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} layer #18 is created with date type [half]. layer #23 is created with date type [half]. DeepSpeed Transformer config is {'layer_id': 19, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} layer #19 is created with date type [half]. DeepSpeed Transformer config is {'layer_id': 20, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} layer #20 is created with date type [half]. DeepSpeed Transformer config is {'layer_id': 21, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} 08/21/2020 10:07:37 - INFO - turing.nvidia_modelingpreln - Init BERT pretrain model layer #21 is created with date type [half]. DeepSpeed Transformer config is {'layer_id': 22, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} layer #22 is created with date type [half]. 
DeepSpeed Transformer config is {'layer_id': 23, 'batch_size': 3, 'hidden_size': 1024, 'max_seq_length': 384, 'heads': 16, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 12345, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False} VOCAB SIZE: 30528 [2020-08-21 10:07:37,398] [INFO] [init.py:90:initialize] DeepSpeed info: version=0.2.0, git-hash=96c4daa, git-branch=HEAD [2020-08-21 10:07:37,399] [INFO] [deepspeed_config.py:411:_set_batch_related_parameters] After Train batch 12 micro_batch 3 and grad_acc 2 [2020-08-21 10:07:37,399] [INFO] [deepspeed_light.py:403:_init_distributed] Set device to local rank 0 within node. layer #23 is created with date type [half]. 08/21/2020 10:07:37 - INFO - turing.nvidia_modelingpreln - Init BERT pretrain model VOCAB SIZE: 30528 [2020-08-21 10:07:37,949] [INFO] [init.py:90:initialize] DeepSpeed info: version=0.2.0, git-hash=96c4daa, git-branch=HEAD [2020-08-21 10:07:37,950] [INFO] [deepspeed_config.py:411:_set_batch_related_parameters] After Train batch 12 micro_batch 3 and grad_acc 2 [2020-08-21 10:07:37,950] [INFO] [deepspeed_light.py:403:_init_distributed] Set device to local rank 1 within node. [2020-08-21 10:07:37,985] [INFO] [deepspeed_light.py:74:_initialize_parameter_parallel_groups] data_parallel_size: 2, parameter_parallel_size: 2 [2020-08-21 10:07:38,497] [INFO] [deepspeed_light.py:74:_initialize_parameter_parallel_groups] data_parallel_size: 2, parameter_parallel_size: 2 [2020-08-21 10:07:38,825] [INFO] [deepspeed_light.py:484:_configure_optimizer] Using DeepSpeed Optimizer param name adam as basic optimizer [2020-08-21 10:07:38,825] [INFO] [deepspeed_light.py:484:_configure_optimizer] Using DeepSpeed Optimizer param name adam as basic optimizer [2020-08-21 10:07:38,826] [INFO] [deepspeed_light.py:486:_configure_optimizer] DeepSpeed Basic Optimizer = FusedAdam ( Parameter Group 0 betas: (0.9, 0.999) bias_correction: False eps: 1e-08 lr: 3e-05 weight_decay: 0.01
Parameter Group 1 betas: (0.9, 0.999) bias_correction: False eps: 1e-08 lr: 3e-05 weight_decay: 0.0 )[2020-08-21 10:07:38,826] [INFO] [deepspeed_light.py:486:_configure_optimizer] DeepSpeed Basic Optimizer = FusedAdam ( Parameter Group 0 betas: (0.9, 0.999) bias_correction: False eps: 1e-08 lr: 3e-05 weight_decay: 0.01
Parameter Group 1 betas: (0.9, 0.999) bias_correction: False eps: 1e-08 lr: 3e-05 weight_decay: 0.0 )
[2020-08-21 10:07:38,826] [INFO] [deepspeed_light.py:560:_configure_zero_optimizer] Creating fp16 ZeRO stage 1 optimizer [2020-08-21 10:07:38,826] [INFO] [deepspeed_light.py:560:_configure_zero_optimizer] Creating fp16 ZeRO stage 1 optimizer [2020-08-21 10:07:38,826] [INFO] [deepspeed_light.py:564:_configure_zero_optimizer] Creating fp16 ZeRO Optimizer Stage 1 [2020-08-21 10:07:38,826] [INFO] [deepspeed_light.py:564:_configure_zero_optimizer] Creating fp16 ZeRO Optimizer Stage 1 [2020-08-21 10:07:38,826] [INFO] [zero_optimizer_stage1.py:168:init] max_elements_per_comm=500000000 [2020-08-21 10:07:38,826] [INFO] [zero_optimizer_stage1.py:168:init] max_elements_per_comm=500000000 [2020-08-21 10:07:38,826] [INFO] [log_utils.py:60:log_dist] [Rank 0] Total number of elements in model: 334098432, max elements per com: 500000000 [2020-08-21 10:07:38,826] [INFO] [log_utils.py:60:log_dist] [Rank 0] num_comm_intervals=1, partition_remaining=0 [2020-08-21 10:07:38,826] [INFO] [zero_optimizer_stage1.py:90:flatten_dense_tensors_sub_partition_aligned] Number of Elements (w. padding) is 334098432 [2020-08-21 10:07:38,846] [INFO] [zero_optimizer_stage1.py:318:get_data_parallel_sub_partitions] partition info: [2020-08-21 10:07:38,847] [INFO] [zero_optimizer_stage1.py:319:get_data_parallel_sub_partitions] total_num_elements=334098432 [2020-08-21 10:07:38,847] [INFO] [zero_optimizer_stage1.py:320:get_data_parallel_sub_partitions] world_size=2 [2020-08-21 10:07:38,847] [INFO] [zero_optimizer_stage1.py:321:get_data_parallel_sub_partitions] max_elements_per_comm=334098432 [2020-08-21 10:07:38,847] [INFO] [zero_optimizer_stage1.py:322:get_data_parallel_sub_partitions] sub_partition_size=167049216 [2020-08-21 10:07:38,847] [INFO] [zero_optimizer_stage1.py:323:get_data_parallel_sub_partitions] num_sub_partitions=2 [2020-08-21 10:07:38,847] [INFO] [zero_optimizer_stage1.py:324:get_data_parallel_sub_partitions] num_comm_intervals=1 [2020-08-21 10:07:38,847] [INFO] [zero_optimizer_stage1.py:325:get_data_parallel_sub_partitions] [2020-08-21 10:07:38,848] [INFO] [deepspeed_light.py:596:_configure_zero_optimizer] Creating fp16 zero stage 1 optimizer [2020-08-21 10:07:38,848] [WARNING] [deepspeed_light.py:359:_configure_lr_scheduler] DeepSpeed using client LR scheduler [2020-08-21 10:07:38,848] [INFO] [deepspeed_light.py:361:_configure_lr_scheduler] DeepSpeed LR Scheduler = None [2020-08-21 10:07:38,848] [INFO] [deepspeed_light.py:912:_report_progress] rank:1 step=0, skipped=0, lr=[3e-05, 3e-05], mom=[(0.9, 0.999), (0.9, 0.999)] 08/21/2020 10:07:38 - INFO - main - propagate deepspeed-config settings to client settings [2020-08-21 10:07:38,850] [INFO] [log_utils.py:60:log_dist] [Rank 0] Total number of elements in model: 4098, max elements per com: 500000000 [2020-08-21 10:07:38,850] [INFO] [log_utils.py:60:log_dist] [Rank 0] num_comm_intervals=1, partition_remaining=0 [2020-08-21 10:07:38,850] [INFO] [zero_optimizer_stage1.py:90:flatten_dense_tensors_sub_partition_aligned] Number of Elements (w. 
padding) is 4098 [2020-08-21 10:07:38,850] [INFO] [zero_optimizer_stage1.py:318:get_data_parallel_sub_partitions] partition info: [2020-08-21 10:07:38,850] [INFO] [zero_optimizer_stage1.py:319:get_data_parallel_sub_partitions] total_num_elements=4098 [2020-08-21 10:07:38,850] [INFO] [zero_optimizer_stage1.py:320:get_data_parallel_sub_partitions] world_size=2 [2020-08-21 10:07:38,850] [INFO] [zero_optimizer_stage1.py:321:get_data_parallel_sub_partitions] max_elements_per_comm=4098 [2020-08-21 10:07:38,850] [INFO] [zero_optimizer_stage1.py:322:get_data_parallel_sub_partitions] sub_partition_size=2049 [2020-08-21 10:07:38,850] [INFO] [zero_optimizer_stage1.py:323:get_data_parallel_sub_partitions] num_sub_partitions=2 [2020-08-21 10:07:38,850] [INFO] [zero_optimizer_stage1.py:324:get_data_parallel_sub_partitions] num_comm_intervals=1 [2020-08-21 10:07:38,851] [INFO] [zero_optimizer_stage1.py:325:get_data_parallel_sub_partitions] [2020-08-21 10:07:38,851] [INFO] [deepspeed_light.py:596:_configure_zero_optimizer] Creating fp16 zero stage 1 optimizer [2020-08-21 10:07:38,851] [WARNING] [deepspeed_light.py:359:_configure_lr_scheduler] DeepSpeed using client LR scheduler [2020-08-21 10:07:38,851] [INFO] [deepspeed_light.py:361:_configure_lr_scheduler] DeepSpeed LR Scheduler = None [2020-08-21 10:07:38,851] [INFO] [deepspeed_light.py:912:_report_progress] rank:0 step=0, skipped=0, lr=[3e-05, 3e-05], mom=[(0.9, 0.999), (0.9, 0.999)] [2020-08-21 10:07:38,851] [INFO] [deepspeed_config.py:424:print] DeepSpeedLight configuration: [2020-08-21 10:07:38,851] [INFO] [deepspeed_config.py:428:print] activation_checkpointing_config <deepspeed.pt.deepspeed_checkpointing_config.DeepSpeedActivationCheckpointingConfig object at 0x7f2500e84668> [2020-08-21 10:07:38,851] [INFO] [deepspeed_config.py:428:print] allgather_size ............... 500000000 [2020-08-21 10:07:38,851] [INFO] [deepspeed_config.py:428:print] allreduce_always_fp32 ........ False [2020-08-21 10:07:38,851] [INFO] [deepspeed_config.py:428:print] disable_allgather ............ False [2020-08-21 10:07:38,851] [INFO] [deepspeed_config.py:428:print] dump_state ................... False [2020-08-21 10:07:38,851] [INFO] [deepspeed_config.py:428:print] dynamic_loss_scale_args ...... None [2020-08-21 10:07:38,851] [INFO] [deepspeed_config.py:428:print] fp16_enabled ................. True [2020-08-21 10:07:38,851] [INFO] [deepspeed_config.py:428:print] global_rank .................. 0 [2020-08-21 10:07:38,851] [INFO] [deepspeed_config.py:428:print] gradient_accumulation_steps .. 2 [2020-08-21 10:07:38,851] [INFO] [deepspeed_config.py:428:print] gradient_clipping ............ 1.0 [2020-08-21 10:07:38,851] [INFO] [deepspeed_config.py:428:print] gradient_predivide_factor .... 1.0 [2020-08-21 10:07:38,851] [INFO] [deepspeed_config.py:428:print] initial_dynamic_scale ........ 4294967296 [2020-08-21 10:07:38,851] [INFO] [deepspeed_config.py:428:print] loss_scale ................... 0 [2020-08-21 10:07:38,851] [INFO] [deepspeed_config.py:428:print] memory_breakdown ............. False [2020-08-21 10:07:38,851] [INFO] [deepspeed_config.py:428:print] optimizer_legacy_fusion ...... False [2020-08-21 10:07:38,852] [INFO] [deepspeed_config.py:428:print] optimizer_name ............... adam [2020-08-21 10:07:38,852] [INFO] [deepspeed_config.py:428:print] optimizer_params ............. {'lr': 3e-05, 'weight_decay': 0.0, 'bias_correction': False} [2020-08-21 10:07:38,852] [INFO] [deepspeed_config.py:428:print] prescale_gradients ........... 
False [2020-08-21 10:07:38,852] [INFO] [deepspeed_config.py:428:print] scheduler_name ............... None [2020-08-21 10:07:38,852] [INFO] [deepspeed_config.py:428:print] scheduler_params ............. None [2020-08-21 10:07:38,852] [INFO] [deepspeed_config.py:428:print] sparse_gradients_enabled ..... False [2020-08-21 10:07:38,852] [INFO] [deepspeed_config.py:428:print] steps_per_print .............. 10 [2020-08-21 10:07:38,852] [INFO] [deepspeed_config.py:428:print] tensorboard_enabled .......... False [2020-08-21 10:07:38,852] [INFO] [deepspeed_config.py:428:print] tensorboard_job_name ......... DeepSpeedJobName [2020-08-21 10:07:38,852] [INFO] [deepspeed_config.py:428:print] tensorboard_output_path ...... [2020-08-21 10:07:38,852] [INFO] [deepspeed_config.py:428:print] train_batch_size ............. 12 [2020-08-21 10:07:38,852] [INFO] [deepspeed_config.py:428:print] train_micro_batch_size_per_gpu 3 [2020-08-21 10:07:38,852] [INFO] [deepspeed_config.py:428:print] wall_clock_breakdown ......... False [2020-08-21 10:07:38,852] [INFO] [deepspeed_config.py:428:print] world_size ................... 2 [2020-08-21 10:07:38,852] [INFO] [deepspeed_config.py:428:print] zero_allow_untested_optimizer False [2020-08-21 10:07:38,852] [INFO] [deepspeed_config.py:428:print] zero_config .................. <deepspeed.pt.deepspeed_zero_config.DeepSpeedZeroConfig object at 0x7f2500e84ba8> [2020-08-21 10:07:38,852] [INFO] [deepspeed_config.py:428:print] zero_enabled ................. True [2020-08-21 10:07:38,852] [INFO] [deepspeed_config.py:428:print] zero_optimization_stage ...... 1 [2020-08-21 10:07:38,852] [INFO] [deepspeed_config.py:435:print] json = { "fp16":{ "enabled":true }, "gradient_clipping":1.0, "optimizer":{ "params":{ "bias_correction":false, "lr":3e-05, "weight_decay":0.0 }, "type":"Adam" }, "steps_per_print":10, "train_batch_size":12, "train_micro_batch_size_per_gpu":3, "zero_optimization":{ "stage":1 } } 08/21/2020 10:07:38 - INFO - main - propagate deepspeed-config settings to client settings 08/21/2020 10:07:45 - INFO - main - Running training 08/21/2020 10:07:45 - INFO - main - Num orig examples = 87599 08/21/2020 10:07:45 - INFO - main - Num split examples = 87970 08/21/2020 10:07:45 - INFO - main - Batch size = 3 08/21/2020 10:07:45 - INFO - main - Num steps = 145998 08/21/2020 10:07:46 - INFO - main - Running training 08/21/2020 10:07:46 - INFO - main - Num orig examples = 87599 08/21/2020 10:07:46 - INFO - main - Num split examples = 87970 08/21/2020 10:07:46 - INFO - main - Batch size = 3 08/21/2020 10:07:46 - INFO - main - Num steps = 145998 Epoch: 0%| | 0/10 [00:00<?, ?it/s[2020-08-21 10:07:48,874] [INFO] [deepspeed_utils.py:118:_handle_overflow] rank 0 detected overflow nan in tensor 0:0 shape torch.Size([30528, 1024]) | 0/14662 [00:00<?, ?it/s] [2020-08-21 10:07:48,907] [INFO] [zero_optimizer_stage1.py:621:step] [deepspeed] OVERFLOW! Skipping step. Attempted loss scale: 4294967296, reducing to 2147483648.0 | 1/14662 [00:00<3:06:59, 1.31it/s] [2020-08-21 10:07:48,907] [INFO] [zero_optimizer_stage1.py:621:step] [deepspeed] OVERFLOW! Skipping step. Attempted loss scale: 4294967296, reducing to 2147483648.0 [2020-08-21 10:07:49,629] [INFO] [deepspeed_utils.py:118:_handle_overflow] rank 0 detected overflow nan in tensor 0:0 shape torch.Size([30528, 1024]) | 3/14662 [00:01<2:12:35, 1.84it/s] [2020-08-21 10:07:49,630] [INFO] [zero_optimizer_stage1.py:621:step] [deepspeed] OVERFLOW! Skipping step. 
Attempted loss scale: 2147483648.0, reducing to 1073741824.0 [2020-08-21 10:07:49,630] [INFO] [zero_optimizer_stage1.py:621:step] [deepspeed] OVERFLOW! Skipping step. Attempted loss scale: 2147483648.0, reducing to 1073741824.0 Warning: NaN or Inf found in input tensor. | 4/14662 [00:02<2:21:27, 1.73it/s] Warning: NaN or Inf found in input tensor. [2020-08-21 10:07:50,359] [INFO] [deepspeed_utils.py:118:_handle_overflow] rank 0 detected overflow nan in tensor 0:0 shape torch.Size([30528, 1024]) | 5/14662 [00:02<1:55:02, 2.12it/s] [2020-08-21 10:07:50,359] [INFO] [zero_optimizer_stage1.py:621:step] [deepspeed] OVERFLOW! Skipping step. Attempted loss scale: 1073741824.0, reducing to 536870912.0 [2020-08-21 10:07:50,359] [INFO] [zero_optimizer_stage1.py:621:step] [deepspeed] OVERFLOW! Skipping step. Attempted loss scale: 1073741824.0, reducing to 536870912.0 Warning: NaN or Inf found in input tensor. | 6/14662 [00:03<2:03:59, 1.97it/s] Warning: NaN or Inf found in input tensor. [2020-08-21 10:07:51,066] [INFO] [deepspeed_utils.py:118:_handle_overflow] rank 0 detected overflow nan in tensor 0:0 shape torch.Size([30528, 1024]) | 7/14662 [00:03<1:47:30, 2.27it/s] [2020-08-21 10:07:51,067] [INFO] [zero_optimizer_stage1.py:621:step] [deepspeed] OVERFLOW! Skipping step. Attempted loss scale: 536870912.0, reducing to 268435456.0 [2020-08-21 10:07:51,067] [INFO] [zero_optimizer_stage1.py:621:step] [deepspeed] OVERFLOW! Skipping step. Attempted loss scale: 536870912.0, reducing to 268435456.0 Warning: NaN or Inf found in input tensor. | 8/14662 [00:03<1:54:35, 2.13it/s] Warning: NaN or Inf found in input tensor.
As for run_squad_deepspeed.sh, I only deleted the argument --model_file; the other arguments remain unchanged.
I tried to run the examples on the Wikipedia and BookCorpus datasets. However, the official DeepSpeed website says that the downloading and pre-processing instructions for these datasets are coming soon, so I haven't run the DeepSpeed BERT pre-training examples. It would be great if you could give me some pre-processing instructions for these two datasets.
Thanks! Looking forward to your reply!
Best wishes, Tony
@RezaYazdaniAminabadi, it might be a problem with mixed precision training. When I set fp16 to false, the training process goes well. I wonder whether there is a way to avoid this issue.
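For clarity, by setting fp16 to false I mean turning off mixed precision in deepspeed_bsz24_config.json (and dropping the --fp16 flag from the launch script); a minimal sketch of the change, with the other fields left as in the config dump above:

```json
{
  "fp16": {
    "enabled": false
  }
}
```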
Looking forward to your reply.
Thanks.
Hi @TonyTangYu,
If I understand correctly from your last comment, you are saying that you ran BingBertSquad without importing any model, is that right? If that's the case, your model is initialized randomly and it should run fine with both fp16 and fp32. I have tried both of them and I only get very low accuracy (between 7 and 12 F1 score)! You may want to try it with a pretrained model. Also, can you please send me the full log of your training? What you have shared only shows the beginning of the training process. Thanks.
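(Concretely, that would mean restoring the --model_file argument you removed from the script and pointing it at a pretrained BERT-large checkpoint.)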
Best regards, Reza
Hi @RezaYazdaniAminabadi, thanks for your quick reply! You are right, I ran BingBertSquad without importing any model. Are there any problems with the model being initialized randomly? In my case, running with fp32 seems to work, but running with fp16 does not. I wonder why and how to solve it.
out.txt is the output of my training process. I trained for 10 epochs, but because of the file upload limit I can only show the information within one epoch. From epoch 2 to epoch 10, the information is a repeat of the following:
[INFO] [deepspeed_utils.py:118:_handle_overflow] rank 0 detected overflow nan in tensor 0:0 shape torch.Size([30528, 1024]) [2020-08-26 21:30:04,116] [INFO] [zero_optimizer_stage1.py:621:step] [deepspeed] OVERFLOW! Skipping step. Attempted loss scale: 1, reducing to 1 Warning: NaN or Inf found in input tensor. Warning: NaN or Inf found in input tensor. [2020-08-26 21:30:04,117] [INFO] [zero_optimizer_stage1.py:621:step] [deepspeed] OVERFLOW! Skipping step. Attempted loss scale: 1, reducing to 1
I found that this might result from mixed precision training. I trained BERT base with fp16=false and there were no such errors.
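If it is purely a loss-scaling problem, my understanding is that the dynamic loss scaler can also be tuned through the fp16 section of the config; a sketch with illustrative values (these are standard DeepSpeed fp16 options, not the exact values I used):

```json
{
  "fp16": {
    "enabled": true,
    "loss_scale": 0,
    "initial_scale_power": 16,
    "loss_scale_window": 1000,
    "hysteresis": 2,
    "min_loss_scale": 1
  }
}
```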
You mentioned I could also try it with a pretrained model, but the official website doesn't provide instructions for preprocessing the Wikipedia and BookCorpus datasets. Could you please provide such instructions so that I can give it a try? Thank you very much!
Looking forward to your reply!
Best wishes, Tony
Hi @RezaYazdaniAminabadi, I printed the loss and it gives the following output:
tensor(6.8750, device='cuda:1', dtype=torch.float16, grad_fn=<...>)   | 0/14662 [00:00<?, ?it/s]
tensor(6.6328, device='cuda:1', dtype=torch.float16, grad_fn=<...>)   | 1/14662 [00:00<2:17:18, 1.78it/s]
tensor(6.8867, device='cuda:0', dtype=torch.float16, grad_fn=<...>)
tensor(6.5078, device='cuda:0', dtype=torch.float16, grad_fn=<...>)
[2020-08-29 15:51:29,296] [INFO] [deepspeed_utils.py:118:_handle_overflow] rank 0 detected overflow nan in tensor 0:0 shape torch.Size([30528, 768])
[2020-08-29 15:51:29,311] [INFO] [zero_optimizer_stage1.py:621:step] [deepspeed] OVERFLOW! Skipping step. Attempted loss scale: 2, reducing to 2
[2020-08-29 15:51:29,311] [INFO] [zero_optimizer_stage1.py:621:step] [deepspeed] OVERFLOW! Skipping step. Attempted loss scale: 2, reducing to 2
tensor(nan, device='cuda:0', dtype=torch.float16, grad_fn=<...>)   | 2/14662 [00:00<1:36:16, 2.54it/s]
tensor(nan, device='cuda:1', dtype=torch.float16, grad_fn=<...>)
tensor(nan, device='cuda:1', dtype=torch.float16, grad_fn=<...>)
tensor(nan, device='cuda:0', dtype=torch.float16, grad_fn=<...>)
I was wondering: is there any chance that this warning results from a division by zero?
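If I understand dynamic loss scaling correctly, the loss is multiplied by the current scale (starting from the default 2^32 = 4294967296) before backward(), and whenever an inf/nan gradient is detected the step is skipped and the scale is halved, which matches the "Attempted loss scale: 4294967296, reducing to 2147483648.0" lines in the earlier log. What worries me is that the loss tensor itself already becomes nan right after the first detected overflow.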
Looking forward to your reply!
Best wishes, Tony
Hi Tony,
Thanks for following up on this. From your log, I realize that you have turned on the ZeRO optimizer. I wonder if there is any reason for this. Could you try BingBertSquad one more time with the ZeRO optimizer disabled? I will also try this on my side to see if I can reproduce the same issue. Thanks.
Best regards, Reza
Hi @RezaYazdaniAminabadi,
Thanks for your reply and your efforts. The reason I turned on the ZeRO optimizer is that I wanted to understand how DeepSpeed works and to explore the effects of ZeRO. Are there any problems with that? Should the ZeRO optimizer only be turned on under certain circumstances? If so, is there a tutorial explaining when I should turn it on and when I should not?
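For reference, the only change I make to toggle it is the zero_optimization block in deepspeed_bsz24_config.json, matching the config dumps above; removing the block (or setting the stage to 0) disables ZeRO:

```json
{
  "zero_optimization": {
    "stage": 1
  }
}
```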
If I disable the ZeRO optimizer, it says:
09/01/2020 08:57:21 - INFO - turing.nvidia_modelingpreln - Init BERT pretrain model 09/01/2020 08:57:21 - INFO - turing.nvidia_modelingpreln - Init BERT pretrain model VOCAB SIZE: 30528 [2020-09-01 08:57:21,369] [INFO] [init.py:90:initialize] DeepSpeed info: version=0.2.0, git-hash=96c4daa, git-branch=HEAD [2020-09-01 08:57:21,369] [INFO] [deepspeed_config.py:411:_set_batch_related_parameters] After Train batch 6 micro_batch 3 and grad_acc 1 [2020-09-01 08:57:21,369] [INFO] [deepspeed_light.py:403:_init_distributed] Set device to local rank 0 within node. VOCAB SIZE: 30528 [2020-09-01 08:57:21,581] [INFO] [init.py:90:initialize] DeepSpeed info: version=0.2.0, git-hash=96c4daa, git-branch=HEAD [2020-09-01 08:57:21,581] [INFO] [deepspeed_config.py:411:_set_batch_related_parameters] After Train batch 6 micro_batch 3 and grad_acc 1 [2020-09-01 08:57:21,581] [INFO] [deepspeed_light.py:403:_init_distributed] Set device to local rank 1 within node. [2020-09-01 08:57:21,749] [INFO] [deepspeed_light.py:74:_initialize_parameter_parallel_groups] data_parallel_size: 2, parameter_parallel_size: 2 [2020-09-01 08:57:22,028] [INFO] [deepspeed_light.py:74:_initialize_parameter_parallel_groups] data_parallel_size: 2, parameter_parallel_size: 2 [2020-09-01 08:57:22,348] [INFO] [deepspeed_light.py:484:_configure_optimizer] Using DeepSpeed Optimizer param name adam as basic optimizer [2020-09-01 08:57:22,348] [INFO] [deepspeed_light.py:484:_configure_optimizer] Using DeepSpeed Optimizer param name adam as basic optimizer [2020-09-01 08:57:22,348] [INFO] [deepspeed_light.py:486:_configure_optimizer] DeepSpeed Basic Optimizer = FusedAdam ( Parameter Group 0 betas: (0.9, 0.999) bias_correction: False eps: 1e-08 lr: 3e-05 weight_decay: 0.01
Parameter Group 1 betas: (0.9, 0.999) bias_correction: False eps: 1e-08 lr: 3e-05 weight_decay: 0.0 ) [2020-09-01 08:57:22,348] [INFO] [deepspeed_light.py:486:_configure_optimizer] DeepSpeed Basic Optimizer = FusedAdam ( Parameter Group 0 betas: (0.9, 0.999) bias_correction: False eps: 1e-08 lr: 3e-05 weight_decay: 0.01
Parameter Group 1 betas: (0.9, 0.999) bias_correction: False eps: 1e-08 lr: 3e-05 weight_decay: 0.0 ) [2020-09-01 08:57:22,348] [INFO] [deepspeed_light.py:526:_configure_fp16_optimizer] Creating fp16 optimizer with dynamic loss scale [2020-09-01 08:57:22,348] [INFO] [deepspeed_light.py:526:_configure_fp16_optimizer] Creating fp16 optimizer with dynamic loss scale [2020-09-01 08:57:22,365] [WARNING] [deepspeed_light.py:359:_configure_lr_scheduler] DeepSpeed using client LR scheduler [2020-09-01 08:57:22,365] [INFO] [deepspeed_light.py:361:_configure_lr_scheduler] DeepSpeed LR Scheduler = None [2020-09-01 08:57:22,365] [INFO] [deepspeed_light.py:912:_report_progress] rank:1 step=0, skipped=0, lr=[3e-05, 3e-05], mom=[(0.9, 0.999), (0.9, 0.999)] 09/01/2020 08:57:22 - INFO - main - propagate deepspeed-config settings to client settings [2020-09-01 08:57:22,366] [WARNING] [deepspeed_light.py:359:_configure_lr_scheduler] DeepSpeed using client LR scheduler [2020-09-01 08:57:22,366] [INFO] [deepspeed_light.py:361:_configure_lr_scheduler] DeepSpeed LR Scheduler = None [2020-09-01 08:57:22,366] [INFO] [deepspeed_light.py:912:_report_progress] rank:0 step=0, skipped=0, lr=[3e-05, 3e-05], mom=[(0.9, 0.999), (0.9, 0.999)] [2020-09-01 08:57:22,366] [INFO] [deepspeed_config.py:424:print] DeepSpeedLight configuration: [2020-09-01 08:57:22,366] [INFO] [deepspeed_config.py:428:print] activation_checkpointing_config <deepspeed.pt.deepspeed_checkpointing_config.DeepSpeedActivationCheckpointingConfig object at 0x7f92b8a86be0> [2020-09-01 08:57:22,366] [INFO] [deepspeed_config.py:428:print] allgather_size ............... 500000000 [2020-09-01 08:57:22,366] [INFO] [deepspeed_config.py:428:print] allreduce_always_fp32 ........ False [2020-09-01 08:57:22,366] [INFO] [deepspeed_config.py:428:print] disable_allgather ............ False [2020-09-01 08:57:22,366] [INFO] [deepspeed_config.py:428:print] dump_state ................... False [2020-09-01 08:57:22,366] [INFO] [deepspeed_config.py:428:print] dynamic_loss_scale_args ...... None [2020-09-01 08:57:22,366] [INFO] [deepspeed_config.py:428:print] fp16_enabled ................. True [2020-09-01 08:57:22,366] [INFO] [deepspeed_config.py:428:print] global_rank .................. 0 [2020-09-01 08:57:22,367] [INFO] [deepspeed_config.py:428:print] gradient_accumulation_steps .. 1 [2020-09-01 08:57:22,367] [INFO] [deepspeed_config.py:428:print] gradient_clipping ............ 1.0 [2020-09-01 08:57:22,367] [INFO] [deepspeed_config.py:428:print] gradient_predivide_factor .... 1.0 [2020-09-01 08:57:22,367] [INFO] [deepspeed_config.py:428:print] initial_dynamic_scale ........ 4294967296 [2020-09-01 08:57:22,367] [INFO] [deepspeed_config.py:428:print] loss_scale ................... 0 [2020-09-01 08:57:22,367] [INFO] [deepspeed_config.py:428:print] memory_breakdown ............. False [2020-09-01 08:57:22,367] [INFO] [deepspeed_config.py:428:print] optimizer_legacy_fusion ...... False [2020-09-01 08:57:22,367] [INFO] [deepspeed_config.py:428:print] optimizer_name ............... adam [2020-09-01 08:57:22,367] [INFO] [deepspeed_config.py:428:print] optimizer_params ............. {'lr': 3e-05, 'weight_decay': 0.0, 'bias_correction': False} [2020-09-01 08:57:22,367] [INFO] [deepspeed_config.py:428:print] prescale_gradients ........... False [2020-09-01 08:57:22,367] [INFO] [deepspeed_config.py:428:print] scheduler_name ............... None [2020-09-01 08:57:22,367] [INFO] [deepspeed_config.py:428:print] scheduler_params ............. 
None [2020-09-01 08:57:22,367] [INFO] [deepspeed_config.py:428:print] sparse_gradients_enabled ..... False [2020-09-01 08:57:22,367] [INFO] [deepspeed_config.py:428:print] steps_per_print .............. 1 [2020-09-01 08:57:22,367] [INFO] [deepspeed_config.py:428:print] tensorboard_enabled .......... False [2020-09-01 08:57:22,367] [INFO] [deepspeed_config.py:428:print] tensorboard_job_name ......... DeepSpeedJobName [2020-09-01 08:57:22,367] [INFO] [deepspeed_config.py:428:print] tensorboard_output_path ...... [2020-09-01 08:57:22,367] [INFO] [deepspeed_config.py:428:print] train_batch_size ............. 6 [2020-09-01 08:57:22,367] [INFO] [deepspeed_config.py:428:print] train_micro_batch_size_per_gpu 3 [2020-09-01 08:57:22,367] [INFO] [deepspeed_config.py:428:print] wall_clock_breakdown ......... False [2020-09-01 08:57:22,367] [INFO] [deepspeed_config.py:428:print] world_size ................... 2 [2020-09-01 08:57:22,367] [INFO] [deepspeed_config.py:428:print] zero_allow_untested_optimizer False [2020-09-01 08:57:22,367] [INFO] [deepspeed_config.py:428:print] zero_config .................. <deepspeed.pt.deepspeed_zero_config.DeepSpeedZeroConfig object at 0x7f92b8beb160> [2020-09-01 08:57:22,367] [INFO] [deepspeed_config.py:428:print] zero_enabled ................. False [2020-09-01 08:57:22,367] [INFO] [deepspeed_config.py:428:print] zero_optimization_stage ...... 0 [2020-09-01 08:57:22,367] [INFO] [deepspeed_config.py:435:print] json = { "fp16":{ "enabled":true }, "gradient_clipping":1.0, "optimizer":{ "params":{ "bias_correction":false, "lr":3e-05, "weight_decay":0.0 }, "type":"Adam" }, "steps_per_print":1, "train_batch_size":6, "train_micro_batch_size_per_gpu":3 } 09/01/2020 08:57:22 - INFO - main - propagate deepspeed-config settings to client settings 09/01/2020 08:57:28 - INFO - main - Running training 09/01/2020 08:57:28 - INFO - main - Num orig examples = 87599 09/01/2020 08:57:28 - INFO - main - Num split examples = 87970 09/01/2020 08:57:28 - INFO - main - Batch size = 3 09/01/2020 08:57:28 - INFO - main - Num steps = 291996 09/01/2020 08:57:29 - INFO - main - Running training 09/01/2020 08:57:29 - INFO - main - Num orig examples = 87599 09/01/2020 08:57:29 - INFO - main - Num split examples = 87970 09/01/2020 08:57:29 - INFO - main - Batch size = 3 09/01/2020 08:57:29 - INFO - main - Num steps = 291996 Epoch: 0%| | 0/10 [00:00<?, ?it/s[2020-09-01 08:57:31,249] [INFO] [fp16_optimizer.py:291:_update_scale] | 0/14662 [00:00<?, ?it/s] Grad overflow on iteration 0 [2020-09-01 08:57:31,249] [INFO] [fp16_optimizer.py:293:_update_scale] Reducing dynamic loss scale from 4294967296 to 2147483648.0 [deepspeed] OVERFLOW! Skipping step. Attempted loss scale: 4294967296, reducing to 2147483648.0 [2020-09-01 08:57:31,250] [INFO] [fp16_optimizer.py:291:_update_scale] Grad overflow on iteration 0 [2020-09-01 08:57:31,250] [INFO] [fp16_optimizer.py:293:_update_scale] Reducing dynamic loss scale from 4294967296 to 2147483648.0 [deepspeed] OVERFLOW! Skipping step. Attempted loss scale: 4294967296, reducing to 2147483648.0 09/01/2020 08:57:31 - INFO - main - bert_squad_progress: step=1 lr=0.0 loss=4.70625 | 1/14662 [00:01<4:50:25, 1.19s/it] [2020-09-01 08:57:31,821] [INFO] [fp16_optimizer.py:291:_update_scale] | 1/14662 [00:01<4:06:11, 1.01s/it] Grad overflow on iteration 1 [2020-09-01 08:57:31,821] [INFO] [fp16_optimizer.py:293:_update_scale] Reducing dynamic loss scale from 2147483648.0 to 1073741824.0 [deepspeed] OVERFLOW! Skipping step. 
Attempted loss scale: 2147483648.0, reducing to 1073741824.0 [2020-09-01 08:57:31,824] [INFO] [fp16_optimizer.py:291:_update_scale] | 2/14662 [00:01<3:35:02, 1.14it/s] Grad overflow on iteration 1 [2020-09-01 08:57:31,824] [INFO] [fp16_optimizer.py:293:_update_scale] Reducing dynamic loss scale from 2147483648.0 to 1073741824.0 [deepspeed] OVERFLOW! Skipping step. Attempted loss scale: 2147483648.0, reducing to 1073741824.0 Warning: NaN or Inf found in input tensor. Warning: NaN or Inf found in input tensor. 09/01/2020 08:57:31 - INFO - main - bert_squad_progress: step=2 lr=2.0548226688036824e-09 loss=nan [2020-09-01 08:57:32,417] [INFO] [fp16_optimizer.py:291:_update_scale] | 2/14662 [00:01<3:13:12, 1.26it/s] Grad overflow on iteration 2 [2020-09-01 08:57:32,417] [INFO] [fp16_optimizer.py:293:_update_scale] Reducing dynamic loss scale from 1073741824.0 to 536870912.0 [deepspeed] OVERFLOW! Skipping step. Attempted loss scale: 1073741824.0, reducing to 536870912.0 Iteration: 0%| | 2/14662 [00:02<4:28:11, 1.10s/it] Epoch: 0%| | 0/10 [00:02<?, ?it/s] Traceback (most recent call last): File "nvidia_run_squad_deepspeed.py", line 1135, in
    main()
  File "nvidia_run_squad_deepspeed.py", line 1018, in main
    model.step()
  File "/home/tangyu/anaconda3/envs/deepspeed/lib/python3.6/site-packages/deepspeed/pt/deepspeed_light.py", line 812, in step
    self.optimizer.step()
  File "/home/tangyu/anaconda3/envs/deepspeed/lib/python3.6/site-packages/deepspeed/pt/fp16_optimizer.py", line 204, in step
    if p.grad is None else p.grad.to(data_type) for p in group
  File "/home/tangyu/anaconda3/envs/deepspeed/lib/python3.6/site-packages/torch/_utils.py", line 229, in _flatten_dense_tensors
    flat = torch.cat([t.contiguous().view(-1) for t in tensors], dim=0)
RuntimeError: CUDA out of memory. Tried to allocate 1.25 GiB (GPU 0; 10.92 GiB total capacity; 6.67 GiB already allocated; 683.19 MiB free; 8.13 GiB reserved in total by PyTorch)
I know the reason for the CUDA OOM: the GPU memory is relatively small, because I am trying to train BERT-large on a 1080 Ti. But the overflow problem still seems to exist, and I wonder why and how to solve it.
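One thing I plan to try for the OOM, while keeping the same effective batch size, is shrinking the micro-batch and compensating with more gradient accumulation in the config; just a sketch for my 2-GPU setup (1 x 6 x 2 = 12):

```json
{
  "train_batch_size": 12,
  "train_micro_batch_size_per_gpu": 1,
  "gradient_accumulation_steps": 6
}
```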
I am looking forward to your reply! Thanks a lot!
Best wishes, Tony
Any updates on this issue? I am also trying to train large transformers with DeepSpeedTransformerLayer and the ZeRO optimizer, but I also get the "Warning: NaN or Inf found in input tensor." warning.
Hi @shuuki4
Could you please provide me with a log/script of your training so that I can try to reproduce the same issue? I wasn't able to reproduce it on my side previously.
Thanks. Reza
Hi Deepspeed team,
I ran DeepSpeedExamples/BingBertSquad on my machine with 2 GPUs. I followed the instructions at https://www.deepspeed.ai/tutorials/bert-finetuning/ and can reproduce the expected results when I run run_squad_baseline.sh. However, when I changed the deepspeed_bsz24_config.json file, it gave me the following warning and I could only get 'loss=nan'. Besides, if I use the original config file, it gives me the same result. The config file is like this:
Could you help me fix it? Thanks!
Tony