alibaba / Pai-Megatron-Patch

The official repo of Pai-Megatron-Patch for LLM & VLM large-scale training, developed by Alibaba Cloud.
Apache License 2.0

llama3-8b initial loss is unexpectedly high #237

Closed EthanChen1234 closed 3 months ago

EthanChen1234 commented 3 months ago

Environment

Image: ngc24.02
Code: commit id 1f3c6fc07750dee17d8eba5b0d9c64b66569101f
Data:
wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/llama3-datasets/wudao_llama3bpe_content_document.bin
wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/llama3-datasets/wudao_llama3bpe_content_document.idx

Weight conversion

Pai-Megatron-Patch/toolkits/model_checkpoints_convertor/llama/hf2megatron_convertor.sh

```bash
SOURCE_CKPT_PATH=/shared-data0/llm-models/llama3_8b/
TARGET_CKPT_PATH=/shared-data0/weights/llama3-8b-tp1pp2/
TP=1
PP=2
MN=llama3-8b
EXTRA_VOCAB_SIZE=0   # size of the vocabulary extension, randomly initialized
mg2hf=false

if [ $mg2hf = true ]; then
  do_options=" --convert_checkpoint_from_megatron_to_transformers"
elif [ $mg2hf = false ]; then
  do_options=""
fi

export PYTHONPATH=$PYTHONPATH:${MEGATRON_PATH}:${MEGATRON_PATH}/Megatron-LM-231007

python hf2megatron.py \
  --load_path ${SOURCE_CKPT_PATH} \
  --save_path ${TARGET_CKPT_PATH} \
  --target_params_dtype fp16 \
  --megatron-path ${MEGATRON_PATH} \
  --target_tensor_model_parallel_size ${TP} \
  --target_pipeline_model_parallel_size ${PP} \
  --model_name ${MN} \
  --extra_num_vocabs ${EXTRA_VOCAB_SIZE} \
  ${do_options}
```
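As a side check, one way to rule the conversion step in or out is to diff the converted word embedding against the original HF weights. The sketch below is not part of the repo: the shard path assumes a typical Megatron layout for TP=1/PP=2 (`release/mp_rank_00_000/model_optim_rng.pt`), and since the nested key layout differs across Megatron versions it simply walks the shard for vocab-sized tensors.

```python
# Sketch of a conversion sanity check (not part of the repo). It loads the original HF
# weights and the first converted Megatron shard, then lists every vocab-sized 2-D tensor
# in the shard so the word embedding can be compared against the HF one.
import torch
from transformers import AutoModelForCausalLM

HF_PATH = "/shared-data0/llm-models/llama3_8b/"
# Hypothetical first-stage shard of the TP=1/PP=2 conversion; adjust to the actual layout.
SHARD_PATH = "/shared-data0/weights/llama3-8b-tp1pp2/release/mp_rank_00_000/model_optim_rng.pt"

hf_emb = (
    AutoModelForCausalLM.from_pretrained(HF_PATH, torch_dtype=torch.float16)
    .get_input_embeddings()
    .weight
)  # [128256, 4096] for Llama-3-8B


def vocab_sized_tensors(obj, vocab_size, prefix=""):
    """Recursively yield (key_path, tensor) for 2-D tensors with a vocab-sized first dim."""
    if torch.is_tensor(obj) and obj.dim() == 2 and obj.shape[0] >= vocab_size:
        yield prefix, obj
    elif isinstance(obj, dict):
        for key, value in obj.items():
            yield from vocab_sized_tensors(value, vocab_size, f"{prefix}{key}.")


shard = torch.load(SHARD_PATH, map_location="cpu", weights_only=False)
for name, tensor in vocab_sized_tensors(shard.get("model", shard), hf_emb.shape[0]):
    # Rows beyond the HF vocab are the EXTRA_VOCAB_SIZE padding (randomly initialized),
    # so compare only the shared rows. The untied output weight is vocab-sized too, so
    # the printed key path tells you which tensor is the input embedding.
    diff = (tensor[: hf_emb.shape[0]].float() - hf_emb.float()).abs().max().item()
    print(f"{name}{tuple(tensor.shape)} max abs diff vs HF embedding: {diff:.4f}")
```

If the shared rows of the converted embedding already differ substantially from the HF weights, the conversion is broken before training even starts.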

Run script

```bash
#!/bin/bash
set -ex
ENV=dsw                       # runtime environment: dlc, dsw
MEGATRON_PATCH_PATH=/shared-data0/code/pai-megatron-patch/
MEGATRON_PATH=${MEGATRON_PATCH_PATH}/Megatron-LM-231007
export PYTHONPATH=${MEGATRON_PATH}:${MEGATRON_PATCH_PATH}:$PYTHONPATH
export CUDA_DEVICE_MAX_CONNECTIONS=1

if [ $ENV = dsw ]; then
  GPUS_PER_NODE=2
  export CUDA_VISIBLE_DEVICES=0,1
  MASTER_ADDR=localhost
  MASTER_PORT=$(shuf -n 1 -i 10000-65535)
  NNODES=1
  NODE_RANK=0
elif [ $ENV = dlc ]; then
  NNODES=${WORLD_SIZE}
  NODE_RANK=${RANK}
  GPUS_PER_NODE=${KUBERNETES_CONTAINER_RESOURCE_GPU}
fi

MODEL_SIZE=8B
TP=1
PP=2
AC=full                       # sel, full
DO=true                       # ZeRO-1
FL=true                       # Flash Attention
SP=false                      # sequence parallel
TE=false                      # transformer engine

BATCH_SIZE=1
GLOBAL_BATCH_SIZE=128
LR=2e-5
MIN_LR=2e-6
SEQ_LEN=4096
PAD_LEN=$SEQ_LEN
EXTRA_VOCAB_SIZE=256
PR=bf16                       # fp16, bf16
SAVE_INTERVAL=100
DATASET_PATH=/shared-data0/data/pretrain/llama3-demo/wudao_llama3bpe_content_document
PRETRAIN_CHECKPOINT_PATH=/shared-data0/weights/llama3-8b-tp1pp2
OUTPUT_DIR=/shared-data0/work/llama3-test/

TRAIN_ITERS=5
LR_WARMUP_ITERS=1
LR_DECAY_ITERS=4

if [ $MODEL_SIZE = 8B ]; then
  NUM_LAYERS=32
  HIDDEN_SIZE=4096
  NUM_ATTN_HEADS=32
  INTERMEDIATE_SIZE=14336
  NUM_KEY_VALUE_HEADS=8
  MAX_POSITION_EMBEDDINGS=8192
  export TRAIN_PARAMETERS=8030527488

  gqa_options=" \
    --group-query-attention \
    --num-query-groups ${NUM_KEY_VALUE_HEADS}"
fi

if [ $AC = full ]; then
  activation_checkpoint_options=" \
    --recompute-granularity full \
    --recompute-method uniform \
    --recompute-num-layers 4"
elif [ $AC = sel ]; then
  activation_checkpoint_options=" \
    --recompute-activations"
elif [ $AC = none ]; then
  activation_checkpoint_options=" \
    "
fi

if [ $PR = fp16 ]; then
  pr_options=" \
    --fp16"
elif [ $PR = bf16 ]; then
  pr_options=" \
    --bf16"
elif [ $PR = fp8 ]; then
  pr_options=" \
    --bf16 \
    --fp8-hybrid \
    --fp8-amax-compute-algo max \
    --fp8-amax-history-len 1024 \
    --transformer-impl transformer_engine"
fi

if [ $DO = true ]; then
  do_options=" \
    --use-distributed-optimizer"
elif [ $DO = false ]; then
  do_options=" \
    "
fi

if [ $FL = true ]; then
  flash_options=" \
    --use-flash-attn"
elif [ $FL = false ]; then
  flash_options=" \
    "
fi

if [ $TE = true ]; then
  te_options=" \
    --transformer-impl transformer_engine"
elif [ $TE = false ]; then
  te_options=" \
    --transformer-impl local"
fi

if [ $SP = true ] && [ $TP -gt 1 ]; then
  sp_options=" \
    --sequence-parallel"
elif [ $SP = false ]; then
  sp_options=" \
    "
fi

if [ $PRETRAIN_CHECKPOINT_PATH != none ]; then
  load_options=" \
    --load $PRETRAIN_CHECKPOINT_PATH"
fi

megatron_options=" \
  --save ${OUTPUT_DIR} \
  --split 99,1,0 \
  --train-data-path ${DATASET_PATH} \
  --data-path ${DATASET_PATH} \
  --lr ${LR} \
  --min-lr ${MIN_LR} \
  --lr-decay-style linear \
  --adam-beta1 0.9 \
  --adam-beta2 0.95 \
  --weight-decay 0.1 \
  --clip-grad 1.0 \
  --init-method-std 0.006 \
  --lr-decay-iters ${LR_DECAY_ITERS} \
  --lr-warmup-iters ${LR_WARMUP_ITERS} \
  --train-iters ${TRAIN_ITERS} \
  --micro-batch-size ${BATCH_SIZE} \
  --global-batch-size ${GLOBAL_BATCH_SIZE} \
  --num-layers ${NUM_LAYERS} \
  --hidden-size ${HIDDEN_SIZE} \
  --num-attention-heads ${NUM_ATTN_HEADS} \
  --ffn-hidden-size ${INTERMEDIATE_SIZE} \
  --seq-length ${SEQ_LEN} \
  --max-position-embeddings ${MAX_POSITION_EMBEDDINGS} \
  --max-padding-length ${PAD_LEN} \
  --log-interval 1 \
  --eval-interval 10000 \
  --eval-iters 0 \
  --save-interval ${SAVE_INTERVAL} \
  --tensorboard-queue-size 1 \
  --tensorboard-dir ${OUTPUT_DIR}/tensorboard/ \
  --log-timers-to-tensorboard \
  --log-batch-size-to-tensorboard \
  --log-validation-ppl-to-tensorboard \
  --tensor-model-parallel-size ${TP} \
  --pipeline-model-parallel-size ${PP} \
  --dataset LLama-Pretrain-Idxmap \
  --no-load-optim \
  --no-load-rng \
  --num-workers 8 \
  --seed 1234 \
  --extra-vocab-size ${EXTRA_VOCAB_SIZE} \
  --patch-tokenizer-type LLamaTokenizer \
  --swiglu \
  --normalization RMSNorm \
  --use-rotary-position-embeddings \
  --position-embedding-type rope \
  --untie-embeddings-and-output-weights \
  --rotary-base 500000 \
  --attention-dropout 0.0 \
  --hidden-dropout 0.0 \
  --disable-bias-linear \
  --norm-epsilon 1e-05 \
  "

DISTRIBUTED_ARGS="--nproc_per_node $GPUS_PER_NODE --nnodes $NNODES --node_rank $NODE_RANK --master_addr $MASTER_ADDR --master_port $MASTER_PORT"

run_cmd="torchrun $DISTRIBUTED_ARGS ../llama2/pretrain_megatron_llama.py ${megatron_options} ${pr_options} ${load_options} ${te_options} ${activation_checkpoint_options} ${do_options} ${flash_options} ${sp_options} ${gqa_options}"

echo ${run_cmd}
$run_cmd
```

Initial loss

[screenshot: training log, initial loss around 7.5]
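For reference when reading this number (a back-of-the-envelope check, not from the repo): with the standard Llama-3 vocabulary of 128256 tokens, a randomly initialized model should start near ln(128256) ≈ 11.8, while a correctly loaded pretrained checkpoint typically starts around 2-3 on this kind of web text.

```python
# Back-of-the-envelope reference points for the initial loss (not from the repo).
import math

VOCAB_SIZE = 128256  # Llama-3 tokenizer vocabulary

# Cross-entropy of a model that predicts (near-)uniformly over the vocabulary,
# i.e. what a randomly initialized network would show at step 0.
print("random-init loss ≈", round(math.log(VOCAB_SIZE), 2))  # ≈ 11.76

# A correctly converted pretrained Llama-3-8B typically starts around 2-3 on web text,
# so an initial loss near 7.5 sits between the two: the weights appear to be loaded,
# but likely mapped onto the wrong module layout rather than missing entirely.
```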

jerryli1981 commented 3 months ago

Got it. Could you also quickly test tp2pp1? I'll try to reproduce your tp1pp2 on my side.

EthanChen1234 commented 3 months ago

> Got it. Could you also quickly test tp2pp1? I'll try to reproduce your tp1pp2 on my side.

With TP2PP1, the initial loss is also around 7.5, consistent with TP1PP2. [screenshot: training log, initial loss around 7.5]

Also, for completeness, the parameters that differ: single node with 2 GPUs, seq_len=4096.
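To localize the problem further, one could score a short piece of Chinese text with the unconverted Hugging Face checkpoint. This is only a sketch: the model path reuses SOURCE_CKPT_PATH from above and the sample sentence is arbitrary. If this baseline already lands around 2-3, the high Megatron loss must come from the conversion or the model definition rather than from the data or tokenizer.

```python
# Minimal HF baseline (sketch): loss of the original, unconverted checkpoint on a short sample.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "/shared-data0/llm-models/llama3_8b/"  # same as SOURCE_CKPT_PATH above

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, torch_dtype=torch.bfloat16).eval()

# Any sentence from the pretraining corpus works; this one is just an arbitrary sample.
text = "北京是中华人民共和国的首都,也是全国的政治、文化中心。"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, labels=inputs["input_ids"])

print("HF baseline loss:", out.loss.item())  # expect roughly 2-3 for an intact checkpoint
```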

jerryli1981 commented 3 months ago

Hi, please take a look at this PR: https://github.com/alibaba/Pai-Megatron-Patch/pull/238

The underlying issue is not really that the loss is high; it is that the llama3 implementation was pointing to the qwen1.5 one. This PR decouples llama3 and qwen1.5 so that each calls its own code. I then re-tested both the megatron and mcore paths following the README, and the loss is 2.x in both cases.

[screenshot: training log after the fix, loss around 2.x]