hiyouga / LLaMA-Factory

Efficiently Fine-Tune 100+ LLMs in WebUI (ACL 2024)
https://arxiv.org/abs/2403.13372
Apache License 2.0

PPO stage raises RuntimeError: Tensors must be CUDA and dense #831

Closed yuye2133 closed 1 year ago

yuye2133 commented 1 year ago

The base model is baichuan-13b. SFT was full-parameter fine-tuning, and the reward model was LoRA fine-tuned on top of the SFT model. The PPO launch script is as follows:

export CUDA_VISIBLE_DEVICES=1,2,3,4

deepspeed --num_gpus 4 --master_port=9901 src/train_bash.py \
    --deepspeed deepspeed_zero3.json \
    --stage ppo \
    --model_name_or_path baichuan-sft \
    --do_train \
    --dataset ppo_data \
    --template baichuan \
    --finetuning_type lora \
    --lora_target W_pack \
    --resume_lora_training False \
    --reward_model rm_output \
    --output_dir ppo_output \
    --per_device_train_batch_size 2 \
    --gradient_accumulation_steps 2 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --save_steps 1000 \
    --learning_rate 1e-5 \
    --num_train_epochs 1.0 \
    --plot_loss

DeepSpeed config file:

{
  "bf16": {
    "enabled": "auto"
  },
  "optimizer": {
    "type": "AdamW",
    "params": {
      "lr": "auto",
      "betas": "auto",
      "eps": "auto",
      "weight_decay": "auto"
    }
  },
  "zero_optimization": {
    "stage": 3,
    "overlap_comm": true,
    "contiguous_gradients": true,
    "sub_group_size": 1e9,
    "reduce_bucket_size": "auto",
    "stage3_prefetch_bucket_size": "auto",
    "stage3_param_persistence_threshold": "auto",
    "stage3_max_live_parameters": 1e9,
    "stage3_max_reuse_distance": 1e9,
    "stage3_gather_16bit_weights_on_model_save": true
  },
  "gradient_accumulation_steps": "auto",
  "gradient_clipping": "auto",
  "steps_per_print": 2000,
  "train_batch_size": "auto",
  "train_micro_batch_size_per_gpu": "auto",
  "wall_clock_breakdown": false
}

Error message:

09/07/2023 18:58:33 - INFO - llmtuner.tuner.ppo.trainer - ***** Running training *****
09/07/2023 18:58:33 - INFO - llmtuner.tuner.ppo.trainer -   Num examples = 63133
09/07/2023 18:58:33 - INFO - llmtuner.tuner.ppo.trainer -   Num Epochs = 1.0
09/07/2023 18:58:33 - INFO - llmtuner.tuner.ppo.trainer -   Instantaneous batch size per device = 2
09/07/2023 18:58:33 - INFO - llmtuner.tuner.ppo.trainer -   Total train batch size (w. parallel, distributed & accumulation) = 16
09/07/2023 18:58:33 - INFO - llmtuner.tuner.ppo.trainer -   Gradient Accumulation steps = 2
09/07/2023 18:58:33 - INFO - llmtuner.tuner.ppo.trainer -   Total optimization steps = 3945
09/07/2023 18:58:33 - INFO - llmtuner.tuner.ppo.trainer -   Number of trainable parameters = 6558721
  0%|                                                                                           | 0/3945 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/code/ppo_test/src/train_bash.py", line 14, in <module>
    main()
  File "/home/code/ppo_test/src/train_bash.py", line 5, in main
    run_exp()
  File "/home/code/ppo_test/src/llmtuner/tuner/tune.py", line 30, in run_exp
    run_ppo(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)
  File "/home/code/ppo_test/src/llmtuner/tuner/ppo/workflow.py", line 81, in run_ppo
    ppo_trainer.ppo_train(max_target_length=data_args.max_target_length)
  File "/home/code/ppo_test/src/llmtuner/tuner/ppo/trainer.py", line 101, in ppo_train
    queries, responses = self.get_inputs(batch, length_sampler, **gen_kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/code/ppo_test/src/llmtuner/tuner/ppo/trainer.py", line 157, in get_inputs
    response: torch.Tensor = unwrapped_model.generate(**batch, **generation_kwargs)
  File "/opt/conda/lib/python3.10/site-packages/trl/models/modeling_value_head.py", line 198, in generate
    return self.pretrained_model.generate(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/peft/peft_model.py", line 977, in generate
    outputs = self.base_model.generate(**kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/transformers/generation/utils.py", line 1588, in generate
    return self.sample(
  File "/opt/conda/lib/python3.10/site-packages/transformers/generation/utils.py", line 2642, in sample
    outputs = self(
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/huggingface/modules/transformers_modules/baichuan-sft/modeling_baichuan.py", line 449, in forward
    outputs = self.model(
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/huggingface/modules/transformers_modules/baichuan-sft/modeling_baichuan.py", line 311, in forward
    inputs_embeds = self.embed_tokens(input_ids)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1538, in _call_impl
    result = forward_call(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/sparse.py", line 162, in forward
    return F.embedding(
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/functional.py", line 2210, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: 'weight' must be 2-D

(Identical tracebacks from the other worker processes were interleaved in the original output and are omitted here.)
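For context, the failing call can be reproduced in isolation. This is a hedged sketch of the suspected mechanism, not the actual DeepSpeed code path: under ZeRO stage 3, parameters are partitioned across ranks and the module keeps only a flattened 1-D shard, so calling generate() on the unwrapped model bypasses DeepSpeed's parameter-gathering hooks and the embedding layer never sees its 2-D weight matrix.

```python
# Minimal reproduction of the error, assuming the embedding weight has been
# replaced by a 1-D placeholder (as a ZeRO-3 parameter shard would be).
import torch
import torch.nn.functional as F

input_ids = torch.tensor([[1, 2, 3]])
partitioned_weight = torch.empty(0)  # 1-D shard instead of (vocab, hidden)

try:
    F.embedding(input_ids, partitioned_weight)
except RuntimeError as e:
    print(e)  # 'weight' must be 2-D
```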
hiyouga commented 1 year ago

The PPO stage does not support DeepSpeed; only Accelerate is supported.

yuye2133 commented 1 year ago

> The PPO stage does not support DeepSpeed; only Accelerate is supported.

What should the Accelerate launch command look like? Is it possible to specify which GPUs to use?
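For reference, a minimal sketch of an equivalent Accelerate launch, assuming accelerate is installed and using the same script arguments as the DeepSpeed command above (this is an illustration, not an answer given in this thread):

```shell
# Restrict the run to four GPUs via CUDA_VISIBLE_DEVICES, then let
# `accelerate launch` spawn one process per visible device.
export CUDA_VISIBLE_DEVICES=1,2,3,4

accelerate launch --num_processes 4 --main_process_port 9901 \
    src/train_bash.py \
    --stage ppo \
    --model_name_or_path baichuan-sft \
    --do_train \
    --dataset ppo_data \
    --template baichuan \
    --finetuning_type lora \
    --lora_target W_pack \
    --reward_model rm_output \
    --output_dir ppo_output \
    --per_device_train_batch_size 2 \
    --gradient_accumulation_steps 2 \
    --learning_rate 1e-5 \
    --num_train_epochs 1.0 \
    --plot_loss
```

`--num_processes` controls how many worker processes Accelerate spawns; GPU selection is done here with `CUDA_VISIBLE_DEVICES` rather than a launcher flag.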