axolotl-ai-cloud / axolotl

Go ahead and axolotl questions
https://axolotl-ai-cloud.github.io/axolotl/

DPO training - 7B model, OOM on A6000 #1108

Open vip-china opened 10 months ago

vip-china commented 10 months ago

Please check that this issue hasn't been reported before.

Expected Behavior

DPO training of a 7B model should work in an A6000 (48 GB) memory environment.

Current behaviour

I am doing DPO training of a 7B model on a machine with 8x A6000 (48 GiB) GPUs, and the run hits an out-of-memory (OOM) error. I followed the official dolphin-2.6-mistral-7b-dpo training (dolphin-dpo.yml) and have tried to reduce GPU memory usage through the parameters, but OOM still occurs.

1. The two YAML files are compared below; the memory-relevant differences are summarized after the second one.

1) The official DPO training YAML (https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo/blob/main/configs/dolphin-dpo.yml):

base_model: cognitivecomputations/dolphin-2.6-mistral-7b
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
is_mistral_derived_model: true

load_in_8bit: false
load_in_4bit: true
strict: false

rl: true
datasets:
  - path: argilla/ultrafeedback-binarized-preferences-cleaned
    split: train
    type: ultra_apply_chatml
  - path: unalignment/toxic-dpo-v0.1
    split: train
    type: toxic_apply_chatml

dataset_prepared_path: last_run_prepared
val_set_size: 0.0
output_dir: /workspace/dolphin-2.6-mistral-7b-dpo

adapter: qlora
lora_model_dir:

sequence_len: 2048
sample_packing: false
pad_to_sequence_len: false

lora_r: 64
lora_alpha: 32
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
  - gate_proj
  - down_proj
  - up_proj
  - q_proj
  - v_proj
  - k_proj
  - o_proj

wandb_project: dolphin
wandb_entity: 
wandb_watch:
wandb_run_id:
wandb_log_model:

gradient_accumulation_steps: 4
micro_batch_size: 4
num_epochs: 1
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 2e-5

train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: true

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 10
eval_steps:
eval_table_size:
eval_table_max_new_tokens: 128
save_steps: 239
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
save_safetensors: true

2) The YAML I am using:

base_model: /data1/ljf2/data/dolphin-2.6-mistral-7b-sft-yhy
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
is_mistral_derived_model: true

load_in_8bit: false
load_in_4bit: false
strict: false

rl: true
datasets:
  - path: /data1/ljf2/data/dpo-16k-new
    split: train
    type: ultra_apply_chatml

dataset_prepared_path: /data1/ljf2/data/data_out
val_set_size: 0.0
output_dir: /data1/ljf2/data/dpo_out

adapter: lora
lora_model_dir:

sequence_len: 16384
sample_packing: false
pad_to_sequence_len: false

lora_r: 64
lora_alpha: 32
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
  - q_proj
  - v_proj
  - k_proj
  - o_proj
gradient_accumulation_steps: 1
micro_batch_size: 1
num_epochs: 1
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 2e-5

train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 10
save_steps: 100
save_total_limit: 3
debug:
deepspeed: /data1/ljf2/data/load_zero2.json
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
save_safetensors: true
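
For clarity, these are the memory-relevant settings that differ between the two files (values copied from the YAMLs above; the YAML value shown is the official dolphin-dpo.yml, the comment shows my config):

load_in_4bit: true                # -> false (no 4-bit quantization on my side)
adapter: qlora                    # -> lora (full-precision bf16 base + LoRA)
sequence_len: 2048                # -> 16384 (8x longer sequences)
micro_batch_size: 4               # -> 1
gradient_accumulation_steps: 4    # -> 1
deepspeed:                        # -> /data1/ljf2/data/load_zero2.json (ZeRO-2 with optimizer offload)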

2. Screenshot of GPU memory usage: (screenshot attached)

3. OOM error screenshot: (screenshot attached)

4. DeepSpeed ZeRO configuration used:

{
  "train_batch_size": "auto",
  "train_micro_batch_size_per_gpu": "auto",
  "gradient_accumulation_steps": "auto",
  "gradient_clipping": "auto",
  "zero_allow_untested_optimizer": true,
  "bf16": {
    "enabled": "auto",
    "loss_scale": 0,
    "initial_scale_power": 16,
    "loss_scale_window": 1000,
    "hysteresis": 2,
    "min_loss_scale": 1
  },
  "zero_optimization": {
    "stage": 2,
    "offload_optimizer": {
      "device": "cpu",
      "pin_memory": true
    },
    "allgather_partitions": true,
    "allgather_bucket_size": 5e8,
    "reduce_scatter": true,
    "reduce_bucket_size": 5e8,
    "overlap_comm": false,
    "contiguous_gradients": true
  }
}
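
For reference, a ZeRO-3 variant of this file that also offloads parameters to CPU would look roughly like the sketch below. The "auto" values follow the usual Hugging Face Trainer/DeepSpeed integration convention; I have not verified this exact file against my setup, it is only a sketch of the option:

{
  "train_batch_size": "auto",
  "train_micro_batch_size_per_gpu": "auto",
  "gradient_accumulation_steps": "auto",
  "gradient_clipping": "auto",
  "zero_allow_untested_optimizer": true,
  "bf16": {
    "enabled": "auto"
  },
  "zero_optimization": {
    "stage": 3,
    "offload_optimizer": {
      "device": "cpu",
      "pin_memory": true
    },
    "offload_param": {
      "device": "cpu",
      "pin_memory": true
    },
    "overlap_comm": true,
    "contiguous_gradients": true,
    "reduce_bucket_size": "auto",
    "stage3_prefetch_bucket_size": "auto",
    "stage3_param_persistence_threshold": "auto",
    "stage3_gather_16bit_weights_on_model_save": true
  }
}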

Steps to reproduce

docker: winglian/axolotl:main-py3.10-cu118-2.0.1

Start training:

export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:256
export TOKENIZERS_PARALLELISM="true"
accelerate launch -m axolotl.cli.train dpo.yml

Config yaml

No response

Possible solution

No response

Which Operating Systems are you using?

Python Version

3.10

axolotl branch-commit

main

Acknowledgements

winglian commented 10 months ago

You need to set one of these to true:

load_in_8bit: false
load_in_4bit: false

and then set adapter: to lora or qlora, depending on which one you set to true.
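
Concretely, following the official dolphin-dpo.yml, the 4-bit + QLoRA combination would look roughly like this in your config (a sketch; the rest of the file stays as it is):

load_in_8bit: false
load_in_4bit: true   # quantize the 7B base to 4-bit
adapter: qlora       # adapter must match: qlora with load_in_4bit, lora with load_in_8bit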