meta-math / MetaMath

MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
https://meta-math.github.io
Apache License 2.0

OOM #12

Closed: hbin0701 closed this issue 12 months ago

hbin0701 commented 1 year ago

I'm wondering why running the training script constantly gives me an OOM error. I'm following the exact sh file format, and I'm using 4 x A100 80GB, so I believe there should be no problem. Do you have any idea why?

yulonghui commented 1 year ago

Hi, there are two changes you can make to get your setup working:

First, you can reduce the batch size, for example with this sh script:

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 -m torch.distributed.launch --master_addr ${MASTER_ADDR} --master_port ${MASTER_PORT} --nproc_per_node=8 --use_env train_math.py \
    --model_name_or_path "path/to/llama-2" \
    --data_path "path/to/metamathqa" \
    --data_length 10000000 \
    --bf16 True \
    --output_dir "path/to/save" \
    --num_train_epochs 3 \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 16 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 1000 \
    --save_total_limit 2 \
    --learning_rate 2e-5 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --fsdp "full_shard auto_wrap" \
    --fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \
    --tf32 True
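
Since the question mentions 4 x A100 80GB, here is a sketch of the same command adapted to 4 GPUs (my assumption, not a configuration I have tested): doubling --gradient_accumulation_steps to 32 keeps the effective batch size at 1 per device x 4 GPUs x 32 steps = 128, the same as 1 x 8 x 16 = 128 in the script above.

# Sketch: same training command on 4 GPUs, effective batch size unchanged (1 x 4 x 32 = 128)
CUDA_VISIBLE_DEVICES=0,1,2,3 python3 -m torch.distributed.launch --master_addr ${MASTER_ADDR} --master_port ${MASTER_PORT} --nproc_per_node=4 --use_env train_math.py \
    --model_name_or_path "path/to/llama-2" \
    --data_path "path/to/metamathqa" \
    --data_length 10000000 \
    --bf16 True \
    --output_dir "path/to/save" \
    --num_train_epochs 3 \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 32 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 1000 \
    --save_total_limit 2 \
    --learning_rate 2e-5 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --fsdp "full_shard auto_wrap" \
    --fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \
    --tf32 True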

Second, you can try to downgrade your transformers version: I found that transformers <= 4.29.1 uses less CUDA memory than a higher transformers version such as transformers >= 4.31.0.
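
For example, a minimal sketch of pinning the older release mentioned above (adjust the exact version as needed):

# Pin transformers to the 4.29.1 release, which used less CUDA memory in my runs
pip install "transformers==4.29.1"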