THUDM / GLM

GLM (General Language Model)

After splitting the GLM-10B-chinese model with MP_SIZE=8 and fine-tuning a seq2seq task, eval fails with an IndexError. It looks like eval is not running with MP_SIZE=8 #149

Open webYFDT opened 1 year ago

webYFDT commented 1 year ago

After splitting the GLM-10B-chinese model with MP_SIZE=8 and fine-tuning a seq2seq task, training runs fine, but the eval stage fails with an IndexError. With the model split at MP_SIZE=8, each rank's embedding dimension is 50048/8 = 6256. During eval, however, the model seems to be evaluated as if MP_SIZE=1. How should this be configured, or how should the code be modified?

[2023-04-06 14:43:16,209] [INFO] [engine.py:1691:_save_zero_checkpoint] zero checkpoint saved ./finetune_checkpoints/GLM-10B-chinese-customization_04-06-14-11/180/zero_pp_rank_0_mp_rank_01optim_states.pt
[2023-04-06 14:43:16,384] [INFO] [engine.py:1691:_save_zero_checkpoint] zero checkpoint saved ./finetune_checkpoints/GLM-10B-chinese-customization_04-06-14-11/180/zero_pp_rank_0_mp_rank_03optim_states.pt
[2023-04-06 14:43:16,394] [INFO] [engine.py:1691:_save_zero_checkpoint] zero checkpoint saved ./finetune_checkpoints/GLM-10B-chinese-customization_04-06-14-11/180/zero_pp_rank_0_mp_rank_02optim_states.pt
[2023-04-06 14:43:17,007] [INFO] [engine.py:1691:_save_zero_checkpoint] zero checkpoint saved ./finetune_checkpoints/GLM-10B-chinese-customization_04-06-14-11/180/zero_pp_rank_0_mp_rank_06optim_states.pt
[2023-04-06 14:43:18,173] [INFO] [engine.py:1691:_save_zero_checkpoint] zero checkpoint saved ./finetune_checkpoints/GLM-10B-chinese-customization_04-06-14-11/180/zero_pp_rank_0_mp_rank_00optim_states.pt
calculating metrics ...
Distributed store created
Traceback (most recent call last):
  File "finetune_glm.py", line 470, in <module>
    main(args)
  File "/root/paddlejob/workspace/env_run/GLM_code_model/GLM-main/tasks/seq2seq/finetune.py", line 147, in main
    finetune(args, train_valid_datasets_provider, {}, end_of_epoch_callback_provider=metrics_func_provider,
  File "/root/paddlejob/workspace/env_run/GLM_code_model/GLM-main/finetune_glm.py", line 419, in finetune
    best_iteration = _train(model, optimizer, lr_scheduler, forward_step,
  File "/root/paddlejob/workspace/env_run/GLM_code_model/GLM-main/finetune_glm.py", line 262, in _train
    score_dict = end_of_epoch_callback(model, epoch, summary_writer=summary_writer)
  File "/root/paddlejob/workspace/env_run/GLM_code_model/GLM-main/tasks/eval_utils.py", line 87, in metrics_func
    predictions, labels, examples = eval_func(model, dataloader, example_dict, args)
  File "/root/paddlejob/workspace/env_run/GLM_code_model/GLM-main/tasks/seq2seq/evaluate.py", line 325, in evaluate
    next_token_scores = self.processors(tokens, next_token_scores)
  File "/root/paddlejob/workspace/env_run/GLM_code_model/GLM-main/generation_utils.py", line 412, in __call__
    scores = processor(input_ids, scores)
  File "/root/paddlejob/workspace/env_run/GLM_code_model/GLM-main/generation_utils.py", line 440, in __call__
    scores[:, self.eos_token_id] = -float("inf")
IndexError: index 50007 is out of bounds for dimension 1 with size 6256
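For context, the failing index is consistent with the numbers above: the scores tensor on each model-parallel rank only covers its own vocabulary shard of size 6256, while eos_token_id = 50007 is a global vocabulary index (assuming the vocabulary is partitioned contiguously across ranks, id 50007 would live on rank 7 at local index 50007 - 7*6256 = 6215). The snippet below is only an illustrative sketch of what a shard-aware EOS mask would have to compute; the function and parameter names are invented for this example and are not the actual GLM generation_utils API.

import torch

def mask_eos_sharded(scores: torch.Tensor, eos_token_id: int,
                     mp_rank: int, vocab_shard_size: int) -> torch.Tensor:
    # scores: [batch, vocab_shard_size] logits held by this rank only.
    start = mp_rank * vocab_shard_size      # first global token id on this rank
    end = start + vocab_shard_size          # one past the last global id on this rank
    if start <= eos_token_id < end:
        # Translate the global id into this rank's local column before masking.
        scores[:, eos_token_id - start] = -float("inf")
    # Ranks that do not own eos_token_id leave their shard untouched.
    return scores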

694344851 commented 1 year ago

On my side with MP_SIZE=4, eval does not raise an error; it just hangs.

webYFDT commented 1 year ago

On my side with MP_SIZE=4, eval does not raise an error; it just hangs.

Could you share the configuration of the three .sh files you use for the launch command below with MP_SIZE=4? I'm not sure whether a problem in my configuration is the cause:
bash scripts/ds_finetune_seq2seq.sh config_tasks/model_blocklm_10B_chinese.sh config_tasks/seq_customization.sh
My configuration is as follows:
1. Script scripts/ds_finetune_seq2seq.sh
DATA_ROOT="data/seq2seq/customization_data/qg/"
CHECKPOINT_PATH="../glm-10b-chinese_MP8"
SAVE_PATH="./finetune_checkpoints"
DATESTR=$(date +"%m-%d-%H-%M")

source $1    # Model
source $2    # Task

# NUM_WORKERS=2
NUM_WORKERS=1
NUM_GPUS_PER_WORKER=8
HOST_FILE_PATH="./hostfile"
MP_SIZE=8
# MP_SIZE=1
MASTER_PORT=$(shuf -n 1 -i 10000-65535)

# OPTIONS_NCCL="NCCL_DEBUG=info NCCL_IB_DISABLE=0 NCCL_NET_GDR_LEVEL=2"
OPTIONS_NCCL="NCCL_DEBUG=info NCCL_IB_DISABLE=0 NCCL_NET_GDR_LEVEL=1"

cd ..
source glm-cuda102-env/glm_cuda102.sh
cd -
DISTRIBUTED_ARGS="${OPTIONS_NCCL} ../glm-cuda102-env/conda/bin/python ../glm-cuda102-env/conda/bin/deepspeed --hostfile ${HOST_FILE_PATH} --master_port ${MASTER_PORT} --num_nodes ${NUM_WORKERS} --num_gpus ${NUM_GPUS_PER_WORKER}"

EXPERIMENT_NAME=${EXPERIMENT_NAME}_${DATESTR}
mkdir logs
run_cmd="${DISTRIBUTED_ARGS} finetune_glm.py \
       --deepspeed \
       --deepspeed_config config_tasks/config_blocklm_10B_cnndm.json \
       --finetune \
       --experiment-name ${EXPERIMENT_NAME} \
       --task ${TASK_NAME} \
       --data-dir ${DATA_PATH} \
       --save ${SAVE_PATH} \
       --checkpoint-activations \
       --num-workers 1 \
       --no-load-lr-scheduler \
       $MODEL_ARGS \
       $TRAIN_ARGS \
       $COMMON_ARGS \
       $TASK_ARGS \
       --fp16 \
       --model-parallel-size ${MP_SIZE} \
       --overwrite \
       2>&1 | tee logs/log-${EXPERIMENT_NAME}.txt"

echo ${run_cmd}
eval ${run_cmd}

2. Script config_tasks/model_blocklm_10B_chinese.sh
MODEL_TYPE="GLM-10B-chinese"
MODEL_ARGS="--block-lm \
            --cloze-eval \
            --task-mask \
            --num-layers 48 \
            --hidden-size 4096 \
            --num-attention-heads 64 \
            --max-position-embeddings 1024 \
            --tokenizer-type ChineseSPTokenizer \
            --load-pretrained ${CHECKPOINT_PATH}"

3. Script config_tasks/seq_customization.sh
EXPERIMENT_NAME=${MODEL_TYPE}-customization
TASK_NAME=customization
DATA_PATH="${DATA_ROOT}"

TRAIN_ARGS="--epochs 10 \ --lr 1e-5 \ --lr-decay-style linear \ --warmup 0.06 \ --label-smoothing 0.1"

COMMON_ARGS="--save-interval 10000 \ --log-interval 50 \ --eval-interval 1000 \ --eval-iters 100 \ --eval-epoch 2"

TASK_ARGS="--src-seq-length 512 \ --tgt-seq-length 128 \ --min-tgt-length 55 \ --length-penalty 0.7 \ --no-repeat-ngram-size 3 \ --num-beams 5 \ --select-topk \ --eval-batch-size 1"

694344851 commented 1 year ago

DATA_ROOT=./GLM-main/data
CHECKPOINT_PATH=./GLM-main/models_glm
SAVE_PATH=./GLM-main/data/finetune_checkpoints
DATESTR=$(date +"%m-%d-%H-%M")

source $1    # Model
source $2    # Task

# NUM_WORKERS: number of servers (nodes) used for training
# num_gpu: number of GPUs
# hostfile path: hostname or ssh alias
# mp_size: model-parallel size

NUM_WORKERS=1
NUM_GPUS_PER_WORKER=4
HOST_FILE_PATH="./hostfile"
MP_SIZE=4
MASTER_PORT=$(shuf -n 1 -i 10000-65535)

OPTIONS_NCCL="NCCL_DEBUG=info NCCL_IB_DISABLE=0 NCCL_NET_GDR_LEVEL=2"

DISTRIBUTED_ARGS="${OPTIONS_NCCL} deepspeed --hostfile ${HOST_FILE_PATH} --master_port ${MASTER_PORT} --num_nodes ${NUM_WORKERS} --num_gpus ${NUM_GPUS_PER_WORKER}"

DISTRIBUTED_ARGS="${OPTIONS_NCCL} deepspeed --master_port ${MASTER_PORT} --num_nodes ${NUM_WORKERS} --num_gpus ${NUM_GPUS_PER_WORKER}" EXPERIMENT_NAME=${EXPERIMENTNAME}${DATESTR} mkdir logs run_cmd="${DISTRIBUTED_ARGS} finetune_glm.py \ --deepspeed \ --deepspeed_config config_tasks/config_blocklm_10B_cnndm.json \ --finetune \ --experiment-name ${EXPERIMENT_NAME} \ --task ${TASK_NAME} \ --data-dir ${DATA_PATH} \ --save ${SAVE_PATH} \ --checkpoint-activations \ --num-workers 1 \ --no-load-lr-scheduler \ $MODEL_ARGS \ $TRAIN_ARGS \ $COMMON_ARGS \ $TASK_ARGS \ --fp16 \ --model-parallel-size ${MP_SIZE} \ --overwrite \ 2>&1 | tee logs/log-${EXPERIMENT_NAME}.txt"

echo ${run_cmd}
eval ${run_cmd}

MODEL_TYPE="GLM-10B-chinese" MODEL_ARGS="--block-lm \ --cloze-eval \ --task-mask \ --num-layers 48 \ --hidden-size 4096 \ --num-attention-heads 64 \ --max-position-embeddings 1024 \ --tokenizer-type ChineseSPTokenizer \ --load-pretrained ${CHECKPOINT_PATH}/glm-10b-chinese_MP4"

EXPERIMENT_NAME=${MODEL_TYPE}-customization
TASK_NAME=customization
DATA_PATH="${DATA_ROOT}/customization"

TRAIN_ARGS="--epochs 1 \ --lr 1e-5 \ --lr-decay-style linear \ --warmup 0.06 \ --label-smoothing 0.1"

COMMON_ARGS="--save-interval 10000 \

--log-interval 50 \

--eval-interval 1000 \

--eval-iters 100 \

--eval-epoch 2"

COMMON_ARGS="--save-interval 10000 \ --log-interval 1 \ --eval-interval 3 \ --eval-iters 2 \ --eval-epoch 2"

TASK_ARGS="--src-seq-length 512 \ --tgt-seq-length 128 \ --min-tgt-length 55 \ --length-penalty 0.7 \ --no-repeat-ngram-size 3 \ --num-beams 5 \ --select-topk \ --eval-batch-size 1"

I don't know where the problem is either; eval hangs right at the start.

694344851 commented 1 year ago

Hi, I'd like to know: after the model is split into several partitions, what does the directory passed to --load-pretrained contain on your side? In other words, how should CHECKPOINT_PATH be set?

allendred commented 1 year ago

Same problem here.

AlanTubring commented 1 year ago

I'd like to ask where CHECKPOINT_PATH="../glm-10b-chinese_MP8" in scripts/ds_finetune_seq2seq.sh comes from.

kunden0612 commented 1 year ago

I'd like to ask where CHECKPOINT_PATH="../glm-10b-chinese_MP8" in scripts/ds_finetune_seq2seq.sh comes from.

change_mp.py can re-split the checkpoint according to your MP size.
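For example (hedged: the argument order is taken from the repo's README, so please verify it locally), splitting the released single-partition checkpoint into 8 model-parallel partitions would look roughly like:
python change_mp.py ../glm-10b-chinese 8
This should write a new directory such as ../glm-10b-chinese_MP8, which can then be used as CHECKPOINT_PATH in scripts/ds_finetune_seq2seq.sh.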