huggingface / transformers

🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
https://huggingface.co/transformers
Apache License 2.0

Cosine LR Scheduler not decaying #33523

Open zanqi opened 6 days ago

zanqi commented 6 days ago

System Info

NA

Who can help?

@muellerzr @SunMarc

Information

Tasks

Reproduction

I ran the following command to start a training job, but the learning rate does not decay as expected. Do I need to change any parameter to make the cosine schedule work?

[Screenshot: logged learning-rate curve from the run; the learning rate does not decay]

torchrun --nnodes=$n_node --nproc_per_node=1 --master_port=25001 \
    --master_addr "127.0.0.1" --node_rank=$CURRENT_RANK \
    llava/train/train_mem.py \
    --deepspeed ./scripts/zero3_offload.json \
    --model_name_or_path $BASE_MODEL_PATH \
    --version v1 \
    --data_path ../LLaVA/armbench/train/dataset_xyxy.json \
    --validation_data_path ../LLaVA/armbench/validation/dataset_xyxy.json \
    --image_folder ../LLaVA/armbench/images/ \
    --vision_tower google/siglip-so400m-patch14-384 \
    --s2 True \
    --s2_scales "384,768" \
    --s2_max_split_size 384 \
    --mm_vision_select_feature cls_patch \
    --mm_projector mlp_downsample \
    --tune_vision_tower False \
    --tune_mm_projector True \
    --tune_language_model True \
    --mm_vision_select_layer -2 \
    --mm_use_im_start_end False \
    --mm_use_im_patch_token False \
    --image_aspect_ratio resize \
    --bf16 True \
    --output_dir ./checkpoints/$OUTPUT \
    --num_train_epochs 1 \
    --per_device_train_batch_size 32 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 4 \
    --evaluation_strategy "steps" \
    --eval_steps 0.1 \
    --save_strategy "steps" \
    --save_steps 100 \
    --save_total_limit 1 \
    --learning_rate 1e-5 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 True \
    --model_max_length 2048 \
    --gradient_checkpointing True \
    --dataloader_num_workers 4 \
    --lazy_preprocess True \
    --vflan_no_system_prompt True \
    --report_to wandb

Expected behavior

The learning rate should decay following a cosine curve after the warmup phase.
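
For reference, here is a rough standalone sketch of the curve I expect (not part of the training script above; the step count is a made-up assumption). As far as I understand, --lr_scheduler_type cosine maps to transformers' get_cosine_schedule_with_warmup:

import torch
from transformers import get_cosine_schedule_with_warmup

model = torch.nn.Linear(8, 8)                       # stand-in model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

num_training_steps = 1000                           # assumed total optimizer steps
num_warmup_steps = int(0.03 * num_training_steps)   # matches --warmup_ratio 0.03
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=num_warmup_steps,
    num_training_steps=num_training_steps,
)

lrs = []
for _ in range(num_training_steps):
    optimizer.step()
    scheduler.step()
    lrs.append(scheduler.get_last_lr()[0])
# Expected shape: a linear ramp up to 1e-5 over ~30 steps, then a cosine decay toward 0.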

LysandreJik commented 5 days ago

cc @SunMarc

SunMarc commented 5 days ago

Hey @zanqi, thanks for the report. I am unable to reproduce your results. Could you share a minimal reproducer? I get the following result in my case:

[Screenshot (2024-09-17): learning-rate curve with a linear warmup followed by cosine decay]

As you can see, we do have a warmup phase followed by the cosine decay. In the photo you shared, the warmup doesn't seem to be linear either.

I've used the following script: https://github.com/SunMarc/minimal-trainer-zoo/blob/main/causal_language_modeling.py with these args:

training_args = TrainingArguments(
    output_dir="results/causal_language_modeling",  # Where weights are stored
    learning_rate=1e-5,  # The learning rate during training
    per_device_train_batch_size=8,  # Number of samples per batch during training
    per_device_eval_batch_size=8,  # Number of samples per batch during evaluation
    num_train_epochs=10,  # How many iterations through the dataloaders should be done
    weight_decay=0,  # Regularization penalization
    evaluation_strategy="epoch",  # How often metrics on the evaluation dataset should be computed
    save_strategy="epoch",  # When to try and save the best model (such as a step number or every iteration)
    lr_scheduler_type="cosine",
    report_to="wandb",
    warmup_ratio=0.03,
    logging_steps=1,  # to log every step; otherwise we only log every 500 steps
)
zanqi commented 4 days ago

I am using these steps:

  1. Clone https://github.com/zanqi/VILA/tree/finetune
  2. Follow the "Installation" section of the README.md
  3. Run sh -x VILA/scripts/v1_5/ft/train_xyxy.slurm

The script in step 3 requires a dataset in LLaVA format: https://github.com/haotian-liu/LLaVA/blob/main/docs/Finetune_Custom_Data.md
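
As far as I know, that format is a JSON list of records shaped roughly like this (the id, file name, and text below are made up for illustration):

[
  {
    "id": "0001",
    "image": "0001.jpg",
    "conversations": [
      { "from": "human", "value": "<image>\nWhere is the object in this picture?" },
      { "from": "gpt", "value": "The object is in the top-left corner of the tote." }
    ]
  }
]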

I haven't pushed my dataset. This page has the steps to download one: https://wandb.ai/byyoung3/ml-news/reports/How-to-Fine-Tune-LLaVA-on-a-Custom-Dataset--Vmlldzo2NjUwNTc1

These three lines in train_xyxy.slurm should be changed to point to the dataset.

    --data_path ../LLaVA/armbench/train/dataset_xyxy_sorted.json \
    --validation_data_path ../LLaVA/armbench/validation/dataset_xyxy_sorted.json \
    --image_folder ../LLaVA/armbench/images/ \
zanqi commented 4 days ago

I found that the issue comes from the DeepSpeed zero3_offload.json file used by my command. It has these lines:

  "scheduler": {
    "type": "WarmupLR",
    "params": {
      "warmup_min_lr": "auto",
      "warmup_max_lr": "auto",
      "warmup_num_steps": "auto"
    }
  },

They override the scheduler type I set on the command line. Removing them seems to fix the problem. I don't know how DeepSpeed wraps around the Hugging Face Trainer; if you have some info on this, it would be helpful for future reference.
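
For future readers, my current understanding (not verified against the Trainer source) is: when the DeepSpeed config contains a "scheduler" section, DeepSpeed builds that scheduler and it takes precedence over --lr_scheduler_type. DeepSpeed's WarmupLR ramps the learning rate up and then holds it constant, which would explain the flat curve in my first screenshot. With the "scheduler" section removed, the Trainer creates and steps its own cosine schedule. A sketch of the kind of ZeRO-3 offload config that leaves scheduling to the Trainer (my real zero3_offload.json has more settings; the only point here is that the "scheduler" block is gone):

{
  "bf16": { "enabled": "auto" },
  "optimizer": {
    "type": "AdamW",
    "params": {
      "lr": "auto",
      "betas": "auto",
      "eps": "auto",
      "weight_decay": "auto"
    }
  },
  "zero_optimization": {
    "stage": 3,
    "offload_optimizer": { "device": "cpu", "pin_memory": true },
    "offload_param": { "device": "cpu", "pin_memory": true },
    "overlap_comm": true,
    "contiguous_gradients": true
  },
  "gradient_accumulation_steps": "auto",
  "gradient_clipping": "auto",
  "train_batch_size": "auto",
  "train_micro_batch_size_per_gpu": "auto"
}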