microsoft / DeepSpeedExamples

Example models using DeepSpeed
Apache License 2.0

Might be a bug in the hybrid engine: in Step 3, wrong generation sequence when the hybrid engine is enabled. #503

Open laoda513 opened 1 year ago

laoda513 commented 1 year ago

When using the hybrid engine, the output sequence is always 'a a a a ...', while if I disable the hybrid engine, the output sequence is correct.

Here is my log with the hybrid engine:

***** Running training *****
Beginning of Epoch 1/1, Total Generation Batches 7626
------------------------------------------------------
Free memory : 7.433289 (GigaBytes)  
Total memory: 21.679382 (GigaBytes)  
Requested memory: 0.128906 (GigaBytes) 
Setting maximum total tokens (input + output) to 512 
WorkSpace: 0x7fb3de000000 
------------------------------------------------------
[2023-05-09 12:33:53,957] [INFO] [loss_scaler.py:188:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 65536, but hysteresis is 2. Reducing hysteresis to 1
[2023-05-09 12:33:54,495] [INFO] [loss_scaler.py:188:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 65536, but hysteresis is 2. Reducing hysteresis to 1
prompt: 

Human: need suggestions on passive income

Assistant: I can't really provide those right now. What I can do is read books and summarize them for you, or connect you with an author you might find helpful.

Human: ok what books do you recommend

Assistant: I can give you a list of the top books in many different categories.

Human: ok tell me more

Assistant:
gen:

Human: need suggestions on passive income

Assistant: I can't really provide those right now. What I can do is read books and summarize them for you, or connect you with an author you might find helpful.

Human: ok what books do you recommend

Assistant: I can give you a list of the top books in many different categories.

Human: ok tell me more

Assistant: a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
["\n\nHuman: need suggestions on passive income\n\nAssistant: I can't really provide those right now. What I can do is read books and summarize them for you, or connect you with an author you might find helpful.\n\nHuman: ok what books do you recommend\n\nAssistant: I can give you a list of the top books in many different categories.\n\nHuman: ok tell me more\n\nAssistant: a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a"]
epoch: 0|step: 0|ppo_ep: 1|act_loss: -0.03240966796875|cri_loss: 0.001739501953125|unsuper_loss: 0.0
average reward score: 1.099609375

The run_1.3b.sh script:

#!/bin/bash
# Copyright (c) Microsoft Corporation.
# SPDX-License-Identifier: Apache-2.0

# DeepSpeed Team
ACTOR_MODEL_PATH=$1
CRITIC_MODEL_PATH=$2
ACTOR_ZERO_STAGE=$3
CRITIC_ZERO_STAGE=$4
OUTPUT=$5
if [ "$OUTPUT" == "" ]; then
    OUTPUT=./output
fi
if [ "$ACTOR_ZERO_STAGE" == "" ]; then
    ACTOR_ZERO_STAGE=2
fi
if [ "$CRITIC_ZERO_STAGE" == "" ]; then
    CRITIC_ZERO_STAGE=2
fi
mkdir -p $OUTPUT

Num_Padding_at_Beginning=1 # this is model related

Actor_Lr=9.65e-6
Critic_Lr=5e-6

deepspeed --master_port 12346 main.py \
   --data_path Dahoas/rm-static \
   --data_split 2,4,4 \
   --actor_model_name_or_path $ACTOR_MODEL_PATH \
   --critic_model_name_or_path $CRITIC_MODEL_PATH \
   --num_padding_at_beginning 1 \
   --per_device_train_batch_size 1 \
   --per_device_mini_train_batch_size 1 \
   --generation_batch_numbers 1 \
   --ppo_epochs 1 \
   --max_answer_seq_len 256 \
   --max_prompt_seq_len 256 \
   --actor_learning_rate ${Actor_Lr} \
   --critic_learning_rate ${Critic_Lr} \
   --num_train_epochs 1 \
   --lr_scheduler_type cosine \
   --gradient_accumulation_steps 1 \
   --disable_actor_dropout \
   --num_warmup_steps 100 \
   --deepspeed --seed 1234 \
   --enable_hybrid_engine \
   --actor_zero_stage $ACTOR_ZERO_STAGE \
   --critic_zero_stage $CRITIC_ZERO_STAGE \
   --enable_ema \
   --output_dir $OUTPUT \
    &> $OUTPUT/training.log

The log without the hybrid engine:

***** Running training *****
Beginning of Epoch 1/1, Total Generation Batches 7626
[2023-05-09 12:42:37,384] [INFO] [loss_scaler.py:188:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 65536, but hysteresis is 2. Reducing hysteresis to 1
[2023-05-09 12:42:37,855] [INFO] [loss_scaler.py:188:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 65536, but hysteresis is 2. Reducing hysteresis to 1
prompt: 

Human: need suggestions on passive income

Assistant: I can't really provide those right now. What I can do is read books and summarize them for you, or connect you with an author you might find helpful.

Human: ok what books do you recommend

Assistant: I can give you a list of the top books in many different categories.

Human: ok tell me more

Assistant:
gen:

Human: need suggestions on passive income

Assistant: I can't really provide those right now. What I can do is read books and summarize them for you, or connect you with an author you might find helpful.

Human: ok what books do you recommend

Assistant: I can give you a list of the top books in many different categories.

Human: ok tell me more

Assistant: Here are some suggestions:

- How to Make Money by Richard Branson
- How to Make Money by Mark Zuckerberg
- How to Make Money by Bill Gates
- How to Make Money by Mark Zuckerberg
- How to Make Money by Mark Zuckerberg
- How to Make Money by Mark Zuckerberg
- How to Make Money by Mark Zuckerberg
- How to Make Money by Mark Zuckerberg
- How to Make Money by Mark Zuckerberg
- How to Make Money by Mark Zuckerberg
- How to Make Money by Mark Zuckerberg
- How to Make Money by Mark Zuckerberg
- How to Make Money by Mark Zuckerberg
- How to Make Money by Mark Zuckerberg
- How to Make Money by Mark Zuckerberg
- How to Make Money by Mark Zuckerberg
- How to Make Money by Mark Zuckerberg
- How to Make Money by Mark Zuckerberg
- How to Make Money by Mark Zuckerberg
- How to Make Money by Mark Zuckerberg
- How to Make Money by Mark Zuckerberg
- How to Make Money by Mark Zuckerberg
- How to Make Money by Mark Zuckerberg
- How to Make Money by Mark Zuckerberg
- How to Make Money by Mark Zuckerberg
- How to Make Money by Mark Zuckerberg
- How to Make Money by Mark Zuckerberg
- How to Make Money
["\n\nHuman: need suggestions on passive income\n\nAssistant: I can't really provide those right now. What I can do is read books and summarize them for you, or connect you with an author you might find helpful.\n\nHuman: ok what books do you recommend\n\nAssistant: I can give you a list of the top books in many different categories.\n\nHuman: ok tell me more\n\nAssistant: Here are some suggestions:\n\n- How to Make Money by Richard Branson\n- How to Make Money by Mark Zuckerberg\n- How to Make Money by Bill Gates\n- How to Make Money by Mark Zuckerberg\n- How to Make Money by Mark Zuckerberg\n- How to Make Money by Mark Zuckerberg\n- How to Make Money by Mark Zuckerberg\n- How to Make Money by Mark Zuckerberg\n- How to Make Money by Mark Zuckerberg\n- How to Make Money by Mark Zuckerberg\n- How to Make Money by Mark Zuckerberg\n- How to Make Money by Mark Zuckerberg\n- How to Make Money by Mark Zuckerberg\n- How to Make Money by Mark Zuckerberg\n- How to Make Money by Mark Zuckerberg\n- How to Make Money by Mark Zuckerberg\n- How to Make Money by Mark Zuckerberg\n- How to Make Money by Mark Zuckerberg\n- How to Make Money by Mark Zuckerberg\n- How to Make Money by Mark Zuckerberg\n- How to Make Money by Mark Zuckerberg\n- How to Make Money by Mark Zuckerberg\n- How to Make Money by Mark Zuckerberg\n- How to Make Money by Mark Zuckerberg\n- How to Make Money by Mark Zuckerberg\n- How to Make Money by Mark Zuckerberg\n- How to Make Money by Mark Zuckerberg\n- How to Make Money"]
epoch: 0|step: 0|ppo_ep: 1|act_loss: -0.10125732421875|cri_loss: 0.0199127197265625|unsuper_loss: 0.0
average reward score: 2.783203125

And my device info:

NVIDIA Driver Version: 515.65
NVIDIA SMI CUDA Version: 11.7
PyTorch Version: 2.0.0+cu117
NVCC CUDA Version: 11.7
DeepSpeed Version: 0.9.2
Docker Version: 23.0.5
NVIDIA Container Toolkit Version: 1.13.1


lurenlym commented 1 year ago

If I enable hybrid_engine, step 3 cannot train; the error looks like:

Traceback (most recent call last):
  File "main.py", line 522, in <module>
    main()
  File "main.py", line 430, in main
    out = trainer.generate_experience(batch_prompt['prompt'],
  File "/gpt/DeepSpeedExamples/applications/DeepSpeed-Chat/training/step3_rlhf_finetuning/ppo_trainer.py", line 98, in generate_experience
    seq = self._generate_sequence(prompts, mask)
  File "/gpt/DeepSpeedExamples/applications/DeepSpeed-Chat/training/step3_rlhf_finetuning/ppo_trainer.py", line 73, in _generate_sequence
    seq = self.actor_model.module.generate(prompts,
  File "/gpt/DeepSpeed/deepspeed/runtime/hybrid_engine.py", line 266, in generate
    self.fuse_lora_weight()
  File "/gpt/DeepSpeed/deepspeed/runtime/hybrid_engine.py", line 144, in fuse_lora_weight
    self._fuse_lora(self.layer_params[layer_id], self.lora_params[layer_id])
  File "/gpt/DeepSpeed/deepspeed/runtime/hybrid_engine.py", line 140, in _fuse_lora
    weight.data += lora_scaling * torch.matmul(lora_left_weight.t(), lora_right_weight.t())
RuntimeError: The size of tensor a (6144) must match the size of tensor b (8192) at non-singleton dimension 0
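
For reference, the failing line fuses the LoRA factors back into the base weight as W += scaling * (A^T @ B^T), which only works if the product has the same shape as W. Below is a minimal standalone sketch that reproduces the mismatch. The dimensions are copied from the traceback; the interpretation (opt-1.3b has hidden size 2048, so 6144 = 3 x 2048 looks like a fused QKV weight and 8192 = 4 x 2048 looks like the FFN width, i.e. a LoRA factor for one layer being fused into another layer's weight) is an assumption, not confirmed by the traceback:

import torch

hidden = 2048           # opt-1.3b hidden size
lora_dim = 128          # matches --actor_lora_dim 128 in the script below
lora_scaling = 1.0      # placeholder; the real value depends on the LoRA config

weight = torch.zeros(6144, hidden)               # destination weight (3 * hidden rows)
lora_left_weight = torch.zeros(lora_dim, 8192)   # factor sized for a wider (4 * hidden) layer
lora_right_weight = torch.zeros(hidden, lora_dim)

# Same operation as _fuse_lora in hybrid_engine.py. The product is
# (8192, 2048) but the destination is (6144, 2048), so this raises:
# RuntimeError: The size of tensor a (6144) must match the size of
# tensor b (8192) at non-singleton dimension 0
weight.data += lora_scaling * torch.matmul(lora_left_weight.t(), lora_right_weight.t())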

my training_scripts

ACTOR_ZERO_STAGE="--actor_zero_stage 2"
CRITIC_ZERO_STAGE="--critic_zero_stage 2"
ACTOR_MODEL_PATH=/gpt/DeepSpeedExamples/applications/DeepSpeed-Chat/training/step1_supervised_finetuning/output/opt-1.3b-lora
CRITIC_MODEL_PATH=/gpt/DeepSpeedExamples/applications/DeepSpeed-Chat/training/step2_reward_model_finetuning/output
OUTPUT="./output/opt-1.3b-lora"

Num_Padding_at_Beginning=1 # this is model related

Actor_Lr=5e-4
Critic_Lr=5e-6

mkdir -p $OUTPUT

deepspeed --master_port 12346 main.py \
   --data_path Dahoas/rm-static \
   --data_split 2,4,4 \
   --actor_model_name_or_path $ACTOR_MODEL_PATH \
   --critic_model_name_or_path $CRITIC_MODEL_PATH \
   --num_padding_at_beginning 1 \
   --per_device_train_batch_size 4 \
   --per_device_mini_train_batch_size 4 \
   --generation_batch_numbers 1 \
   --ppo_epochs 1 \
   --max_answer_seq_len 256 \
   --max_prompt_seq_len 256 \
   --actor_learning_rate ${Actor_Lr} \
   --critic_learning_rate ${Critic_Lr} \
   --num_train_epochs 1 \
   --lr_scheduler_type cosine \
   --gradient_accumulation_steps 1 \
   --num_warmup_steps 100 \
   --deepspeed --seed 1234 \
   --disable_actor_dropout \
   ${ACTOR_ZERO_STAGE} \
   ${CRITIC_ZERO_STAGE} \
   --actor_lora_dim 128 \
   --enable_hybrid_engine \
   --actor_lora_module_name decoder.layers. \
   --output_dir $OUTPUT \
    &> $OUTPUT/training.log
amaarora commented 1 year ago

Same issue.

amaarora commented 1 year ago

How does one disable hybrid engine?

samadejacobs commented 1 year ago

@amaarora, to disable the hybrid engine, remove the --enable_hybrid_engine flag from the training script.
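
For example, in run_1.3b.sh above that means deleting the --enable_hybrid_engine \ line from the deepspeed invocation (the preceding line's trailing backslash keeps the command intact). A quick in-place edit, assuming the script file name used earlier in this thread:

sed -i '/--enable_hybrid_engine/d' run_1.3b.sh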

Alexandra9898 commented 1 year ago

I have the same issue! When I do not set --enable_hybrid_engine, the error is gone. However, will training become slower?

Junyiliu0 commented 1 year ago

But this creates a new problem: module.generate forces the model to generate a sequence whose length is max_min_length (the full prompt-plus-answer budget), which leads to repetition in the generation.
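
For context, here is a self-contained illustration of that effect. The checkpoint name and the 64-token budget are placeholders, not from this issue; as far as I can tell, the equivalent cap in ppo_trainer.py is the prompt length plus --max_answer_seq_len:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoint for illustration; the issue uses a fine-tuned opt-1.3b.
tok = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

inputs = tok("\n\nHuman: ok tell me more\n\nAssistant:", return_tensors="pt")
max_min_length = inputs.input_ids.shape[1] + 64  # prompt length + answer budget

# With min_length == max_length, the EOS token is suppressed until the full
# budget is used, so generation cannot stop early and tends to loop/repeat.
out = model.generate(**inputs,
                     max_length=max_min_length,
                     min_length=max_min_length)
print(tok.decode(out[0]))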

GeekDream-x commented 8 months ago

The issue still exists...