PaddlePaddle / PaddleNLP

👑 Easy-to-use and powerful NLP and LLM library with 🤗 Awesome model zoo, supporting wide-range of NLP tasks from research to industrial applications, including 🗂Text Classification, 🔍 Neural Search, ❓ Question Answering, ℹ️ Information Extraction, 📄 Document Intelligence, 💌 Sentiment Analysis etc.
https://paddlenlp.readthedocs.io
Apache License 2.0

[Bug]: PaddleNLP 3.0 Llama2 model uses more GPU memory #8756

Closed · AndSonder closed this issue 2 months ago

AndSonder commented 2 months ago

Software environment

- paddlepaddle: develop
- paddlepaddle-gpu: develop
- paddlenlp: develop

Duplicate issues

Error description

With the same command, running Llama2 on PaddleNLP 3.0 now uses more GPU memory.

PaddleNLP build from June 25: [image]

PaddleNLP build from July 11: [image]
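
For reference, the peak usage shown in the screenshots can also be captured programmatically instead of being read off a monitor. A minimal sketch using Paddle's CUDA memory statistics API; where to call it (e.g. right after a training step) is up to the reader, and it is not part of run_pretrain_auto.py:

import paddle

# Peak memory handed out by Paddle's allocator on the current GPU.
allocated = paddle.device.cuda.max_memory_allocated()
# Peak memory reserved (cached) by the allocator; usually closer to
# what nvidia-smi reports.
reserved = paddle.device.cuda.max_memory_reserved()

print(f"max allocated: {allocated / 1024**3:.2f} GiB, "
      f"max reserved: {reserved / 1024**3:.2f} GiB")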

Steps to reproduce & code

set -x
unset CUDA_VISIBLE_DEVICES

task_name="llama_auto_dp2mp2pp2"
rm -rf output/$task_name/
rm -rf "output/$task_name""_log"
rm -rf auto_3d/

export GLOG_v=0
# export PARALLEL_CROSS_ENTROPY=true
export FLAGS_call_stack_level=2
export PYTHONPATH=../../../:$PYTHONPATH
# export FLAGS_log_memory_stats=true
# export NCCL_BUFFSIZE=20971520

# export FLAGS_call_stack_level=3
# export FLAGS_use_cuda_managed_memory=true

# export FLAGS_embedding_deterministic=1        
# export FLAGS_cudnn_deterministic=1

export NCCL_BUFFSIZE=120971520

export FLAGS_benchmark=1
# export NCCL_DEBUG=INFO 
# export NCCL_DEBUG_SUBSYS=All

export CUDA_DEVICE_MAX_CONNECTIONS=0

to_static=1  # whether to enable dynamic-to-static (to_static) training

python -u  -m paddle.distributed.launch \
    --gpus "0,1,2,3" \
    --log_dir "auto_3d" \
    run_pretrain_auto.py \
    --model_type "llama" \
    --model_name_or_path "facebook/llama-7b" \
    --tokenizer_name_or_path "facebook/llama-7b" \
    --input_dir "../../data" \
    --output_dir "output/$task_name" \
    --split 949,50,1 \
    --max_seq_length 2048 \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 2 \
    --gradient_accumulation_steps 8 \
    --use_flash_attention 0 \
    --use_fused_rms_norm 0 \
    --fp16 0 \
    --fp16_opt_level "O2"  \
    --scale_loss 1024 \
    --pipeline_parallel_degree 4 \
    --virtual_pp_degree 2 \
    --pipeline_schedule_mode "VPP" \
    --tensor_parallel_degree 1 \
    --sharding_parallel_degree 1 \
    --sharding "stage1" \
    --learning_rate 0.01 \
    --min_learning_rate 0.00001 \
    --max_steps 20000 \
    --save_steps 5000000 \
    --weight_decay 0.01 \
    --warmup_ratio 0.01 \
    --logging_steps 1 \
    --dataloader_num_workers 1 \
    --sharding "" \
    --eval_steps 1000000 \
    --disable_tqdm true \
    --continue_training 0 \
    --recompute 1 \
    --do_train 1 \
    --do_eval \
    --device "gpu" \
    --data_impl "mmap" \
    --enable_auto_parallel 1 \
    --max_grad_norm 1.0 \
    --to_static $to_static \
    --num_hidden_layers 8

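Since FLAGS_log_memory_stats is left commented out above, an external monitor is another way to record the per-device numbers being compared. A minimal sketch that polls the NVIDIA driver via pynvml while the launch script runs; pynvml being installed is an assumption, and the device count of 4 just mirrors the --gpus list above:

import time
import pynvml

pynvml.nvmlInit()
handles = [pynvml.nvmlDeviceGetHandleByIndex(i) for i in range(4)]

try:
    while True:
        # Driver-level used memory per GPU, in GiB.
        used = [pynvml.nvmlDeviceGetMemoryInfo(h).used / 1024**3 for h in handles]
        print(" | ".join(f"gpu{i}: {u:.2f} GiB" for i, u in enumerate(used)))
        time.sleep(5)  # sample every 5 seconds
except KeyboardInterrupt:
    pynvml.nvmlShutdown()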
AndSonder commented 2 months ago

The cause of the memory growth has been found: https://github.com/PaddlePaddle/PaddleNLP/pull/8667 enables gradient_sync_after_accumulate by default, which is responsible for the increase.
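
For context on why that switch moves peak memory: with gradient_sync_after_accumulate enabled, the data-parallel all-reduce is deferred until all gradient_accumulation_steps micro-batches have run, so the local, not-yet-synchronized gradient buffers (plus any fusion or communication scratch) stay live for the whole accumulation window. A rough conceptual sketch of the two schedules, with purely illustrative names rather than PaddleNLP's actual implementation:

def all_reduce(grads):
    # Stand-in for the data-parallel all-reduce; a real implementation
    # would average each buffer across ranks.
    return grads

def train_step(micro_batches, sync_after_accumulate):
    grads = [0.0]  # stand-in for per-parameter gradient buffers
    for batch in micro_batches:
        grads[0] += batch  # stand-in for loss.backward() accumulation
        if not sync_after_accumulate:
            # Sync every micro-batch: buffers are reconciled as training
            # goes, so per-step communication scratch can be reused.
            grads = all_reduce(grads)
    if sync_after_accumulate:
        # One collective at the end: fewer launches, but the raw local
        # gradients stay unsynchronized (and any communication buffers
        # stay allocated) until here, which raises peak memory.
        grads = all_reduce(grads)
    return grads

print(train_step([1.0, 2.0, 3.0], sync_after_accumulate=True))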