hiyouga / LLaMA-Factory

Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
https://arxiv.org/abs/2403.13372
Apache License 2.0

2Nodes * 8 A100 80G sft full Qwen2VL OOM #5590

Open VincentVanNF opened 1 month ago

VincentVanNF commented 1 month ago

Reminder

System Info

Based on the memory estimates in the documentation, the 70B model needs roughly 600 GB of GPU memory. I am already using two nodes of A100 80G (16 GPUs in total), which is far more than 600 GB, yet every GPU still goes OOM. Training script:

  DISTRIBUTED_ARGS="
      --nproc_per_node $GPUS_PER_NODE \
      --nnodes $NNODES \
      --node_rank $NODE_RANK \
      --master_addr $MASTER_ADDR \
      --master_port $MASTER_PORT
      "
  torchrun $DISTRIBUTED_ARGS src/train.py \
      --deepspeed $DS_CONFIG_PATH \
      --stage sft \
      --do_train \
      --model_name_or_path $MODEL_PATH \
      --dataset_dir $DATASET \
      --dataset $dataset \
      --template qwen2_vl \
      --finetuning_type full \
      --output_dir $OUTPUT_PATH \
      --overwrite_cache \
      --overwrite_output_dir \
      --warmup_ratio 0.1 \
      --weight_decay 0.1 \
      --per_device_train_batch_size 1 \
      --per_device_eval_batch_size 1 \
      --gradient_accumulation_steps 16 \
      --ddp_timeout 180000000 \
      --learning_rate 1e-6 \
      --lr_scheduler_type cosine \
      --logging_steps 200 \
      --cutoff_len 2048 \
      --save_strategy epoch \
      --plot_loss \
      --compute_accuracy \
      --num_train_epochs 6 \
      --bf16 \
      --image_resolution 448 \
      --fix_embedding False \
      --fix_vit False \
      --attn_implementation $attn_implementation

The ds_z3_config.json used:

 {
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
    "gradient_clipping": "auto",
    "zero_allow_untested_optimizer": true,
    "fp16": {
      "enabled": "auto",
      "loss_scale": 0,
      "loss_scale_window": 1000,
      "initial_scale_power": 16,
      "hysteresis": 2,
      "min_loss_scale": 1
    },
    "bf16": {
      "enabled": "auto"
    },
    "zero_optimization": {
      "stage": 3,
      "overlap_comm": true,
      "contiguous_gradients": true,
      "sub_group_size": 1e9,
      "reduce_bucket_size": "auto",
      "stage3_prefetch_bucket_size": "auto",
      "stage3_param_persistence_threshold": "auto",
      "stage3_max_live_parameters": 1e9,
      "stage3_max_reuse_distance": 1e9,
      "stage3_gather_16bit_weights_on_model_save": true
    }
}
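
For context: with mixed-precision AdamW, a ~72B-parameter model needs roughly 16 bytes per parameter for weights, gradients, and optimizer states (about 1.1 TB), i.e. roughly 72 GB per GPU once ZeRO-3 shards it across 16 GPUs, before activations, the vision tower, communication buckets, and CUDA overhead are counted. One thing that might avoid the OOM at the optimizer step is ZeRO-3 CPU offload. Below is a sketch of the zero_optimization section using DeepSpeed's documented offload keys (the rest of the file stays the same; LLaMA-Factory ships a similar example config, examples/deepspeed/ds_z3_offload_config.json, the exact path being an assumption here). Note that offload requires enough host RAM and makes each optimizer step noticeably slower:

    "zero_optimization": {
      "stage": 3,
      "offload_optimizer": {
        "device": "cpu",
        "pin_memory": true
      },
      "offload_param": {
        "device": "cpu",
        "pin_memory": true
      },
      "overlap_comm": true,
      "contiguous_gradients": true,
      "sub_group_size": 1e9,
      "reduce_bucket_size": "auto",
      "stage3_prefetch_bucket_size": "auto",
      "stage3_param_persistence_threshold": "auto",
      "stage3_max_live_parameters": 1e9,
      "stage3_max_reuse_distance": 1e9,
      "stage3_gather_16bit_weights_on_model_save": true
    }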

Error (multiple ranks report the same traceback, deduplicated below):

Traceback (most recent call last):
  File "/workdir/src/train.py", line 31, in <module>
    main()
  File "/workdir/src/train.py", line 22, in main
    run_exp()
  File "/workdir/src/llamafactory/train/tuner.py", line 53, in run_exp
    run_sft(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)
  File "/workdir/src/llamafactory/train/sft/workflow.py", line 129, in run_sft
    train_result = trainer.train(resume_from_checkpoint=training_args.resume_from_checkpoint)
  File "/home/hadoop/.local/lib/python3.9/site-packages/transformers/trainer.py", line 1991, in train
    return inner_training_loop(
  File "/home/hadoop/.local/lib/python3.9/site-packages/transformers/trainer.py", line 2327, in _inner_training_loop
    tr_loss_step = self.training_step(model, inputs)
  File "/home/hadoop/.local/lib/python3.9/site-packages/transformers/trainer.py", line 3452, in training_step
    self.accelerator.backward(loss, **kwargs)
  File "/home/hadoop/.local/lib/python3.9/site-packages/accelerate/accelerator.py", line 2188, in backward
    self.deepspeed_engine_wrapped.backward(loss, **kwargs)
  File "/home/hadoop/.local/lib/python3.9/site-packages/accelerate/utils/deepspeed.py", line 175, in backward
    self.engine.step()
  File "/home/hadoop/.local/lib/python3.9/site-packages/deepspeed/runtime/engine.py", line 2160, in step
    self._take_model_step(lr_kwargs)
  File "/home/hadoop/.local/lib/python3.9/site-packages/deepspeed/runtime/engine.py", line 2066, in _take_model_step
    self.optimizer.step()
  File "/home/hadoop/.local/lib/python3.9/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
    ret_val = func(*args, **kwargs)
  File "/home/hadoop/.local/lib/python3.9/site-packages/deepspeed/runtime/zero/stage3.py", line 2050, in step
    self._optimizer_step(sub_group_id)
  File "/home/hadoop/.local/lib/python3.9/site-packages/deepspeed/runtime/zero/stage3.py", line 947, in _optimizer_step
    self.optimizer.step()
  File "/usr/local/conda/lib/python3.9/site-packages/torch/optim/lr_scheduler.py", line 68, in wrapper
    return wrapped(*args, **kwargs)
  File "/usr/local/conda/lib/python3.9/site-packages/torch/optim/optimizer.py", line 373, in wrapper
    out = func(*args, **kwargs)
  File "/usr/local/conda/lib/python3.9/site-packages/torch/optim/optimizer.py", line 76, in _use_grad
    ret = func(self, *args, **kwargs)
  File "/usr/local/conda/lib/python3.9/site-packages/torch/optim/adamw.py", line 173, in step
    self._init_group(
  File "/usr/local/conda/lib/python3.9/site-packages/torch/optim/adamw.py", line 125, in _init_group
    state["exp_avg_sq"] = torch.zeros_like(
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.74 GiB. GPU 0 has a total capacity of 79.15 GiB of which 1.33 GiB is free. Process 55179 has 77.81 GiB memory in use. Of the allocated memory 57.88 GiB is allocated by PyTorch, and 17.62 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
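
The failing allocation is the AdamW second-moment buffer (exp_avg_sq) created during the sharded optimizer step, and the message notes that 17.62 GiB is reserved but unallocated, so allocator fragmentation contributes. Setting the allocator option the error message itself suggests is a cheap experiment (a sketch, assuming it is exported on every rank before launching torchrun); it only mitigates fragmentation and will not help if the sharded optimizer states simply do not fit:

    # reduce CUDA caching-allocator fragmentation (value is illustrative)
    export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128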

Reproduction

Same training script and ds_z3_config.json as shown under "System Info" above.

Expected behavior

No response

Others

No response

jedcheng commented 1 month ago

For the 72B text model, I used 4 nodes × 4 H100 96 GB (16 GPUs). CPT was fine, but SFT went OOM (cutoff_len 4096, DeepSpeed ZeRO-3).

LoRA works fine for SFT in my case, so this doesn't block me.

If the 72B text model OOMs with 256 GB more VRAM than your setup, full SFT of the VL model will likely not fit either. (Correct me if I'm wrong.)

xingenju commented 3 days ago

Could you share your config for 72B LoRA SFT? @jedcheng
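
For reference only, and explicitly not @jedcheng's actual settings: a minimal LoRA SFT invocation using LLaMA-Factory's standard LoRA flags might look like the sketch below, with lora_rank, lora_target, the learning rate, and the DeepSpeed config as placeholder choices to adapt:

    torchrun $DISTRIBUTED_ARGS src/train.py \
        --deepspeed $DS_CONFIG_PATH \
        --stage sft \
        --do_train \
        --model_name_or_path $MODEL_PATH \
        --dataset $dataset \
        --template qwen2_vl \
        --finetuning_type lora \
        --lora_rank 16 \
        --lora_target all \
        --cutoff_len 2048 \
        --per_device_train_batch_size 1 \
        --gradient_accumulation_steps 16 \
        --learning_rate 1e-4 \
        --num_train_epochs 3 \
        --bf16 \
        --output_dir $OUTPUT_PATH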