PaddlePaddle / Paddle

PArallel Distributed Deep LEarning: Machine Learning Framework from Industrial Practice (the PaddlePaddle ("飞桨") core framework: high-performance single-machine and distributed training and cross-platform deployment for deep learning & machine learning)
http://www.paddlepaddle.org/
Apache License 2.0

enable_mp_skip_c_identity raises an error under pp gradient merge + recompute #59290

Open BeingGod opened 11 months ago

BeingGod commented 11 months ago

Describe the Bug

PaddleNLP (develop) commit id: d181e352e547440490bf66a13fdcee9d2eb5a94e
Paddle (develop) commit id: 9b36e53f24ac5f471b20de99e0cc3980f38b44ab

Error output: image

Reproduction script:

set -x
SCRIPT_HOME=$(cd $(dirname $0); pwd)

CARDS="0,1,2,3"
export NCCL_SHM_DISABLE=1

task_name="perf"
rm -rf "$SCRIPT_HOME/output/$task_name/"
rm -rf "$SCRIPT_HOME/output/${task_name}_log"

TP=2
PP=2
SHARDING_STAGE="stage1"

if [ "${PERF:-}" = "1" ]; then
    export CUDA_LAUNCH_BLOCKING="1"
    export PROFILER_OPTIONS="batch_range=[1, 2]; profile_path=./profiler/${task_name}; record_shapes=True; timer_only=False"

    rm -rf $SCRIPT_HOME/profiler/${task_name}
fi

python -u  -m paddle.distributed.launch \
    --devices=$CARDS \
    --log_dir "output/${task_name}_log" \
    run_pretrain.py \
    --model_type "llama" \
    --model_name_or_path "facebook/llama-7b" \
    --tokenizer_name_or_path "facebook/llama-7b" \
    --input_dir "./data" \
    --output_dir "output/$task_name" \
    --split 949,50,1 \
    --max_seq_length 1024 \
    --per_device_train_batch_size 2 \
    --per_device_eval_batch_size 2 \
    --fuse_attention_qkv 0 \
    --fuse_attention_ffn 0 \
    --use_flash_attention 0 \
    --use_fused_rms_norm 1 \
    --use_fused_rope 0 \
    --fp16 \
    --fp16_opt_level "O2" \
    --scale_loss 1024 \
    --amp_master_grad 0 \
    --max_grad_norm 1.0 \
    --pipeline_parallel_config "enable_delay_scale_loss" \
    --tensor_parallel_config "enable_mp_async_allreduce enable_mp_skip_c_identity" \
    --tensor_parallel_degree $TP \
    --pipeline_parallel_degree $PP \
    --sharding $SHARDING_STAGE \
    --virtual_pp_degree 1 \
    --learning_rate 5.0e-5 \
    --min_learning_rate 5.0e-7 \
    --max_steps 5 \
    --save_steps 1000 \
    --weight_decay 0.01 \
    --adam_beta1 0.9 \
    --adam_beta2 0.95 \
    --warmup_ratio 0.1 \
    --logging_steps 1 \
    --dataloader_num_workers 0 \
    --gradient_accumulation_steps 4 \
    --eval_steps 1000 \
    --report_to "visualdl" \
    --disable_tqdm true \
    --continue_training 0 \
    --recompute 1 \
    --do_train \
    --device "gpu" \
    --overwrite_output_dir True
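The `PERF` toggle near the top of the script can be exercised in isolation. A minimal sketch of a quoting-safe guard for such an env-var flag (the helper name `perf_enabled` is illustrative, not part of the script):

```shell
#!/bin/sh
# Quoting "${PERF:-}" keeps the comparison valid even when PERF is
# unset; an unquoted $PERF would expand to nothing under `set -u`
# or misbehave if the value ever contains spaces.
perf_enabled() {
    [ "${PERF:-}" = "1" ]
}

PERF=1
perf_enabled && echo "profiling on"    # prints "profiling on"
unset PERF
perf_enabled || echo "profiling off"   # prints "profiling off"
```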

Additional Supplementary Information

Analysis: the bug is triggered when the matmul operator of ColumnParallelLinear is invoked without any other operator having been invoked before it.
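For bisecting, it may be quicker to toggle only the suspect option in the launch command above (assumption, per the issue title: the run is expected to pass once enable_mp_skip_c_identity is removed, with all other settings unchanged):

```shell
# failing configuration (as in the script above):
--tensor_parallel_config "enable_mp_async_allreduce enable_mp_skip_c_identity"
# control run, assumed clean per the issue title:
--tensor_parallel_config "enable_mp_async_allreduce"
```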

w5688414 commented 11 months ago

Could you share a minimal reproduction script?