allenai / open-instruct


About the training seed #188

Open lucasliunju opened 1 month ago

lucasliunju commented 1 month ago

Hi, thanks for your great repo.

I am trying to use this code to fine-tune llama2-7b on tulu-v2, and I find that I always get the same loss curve even when I use different seeds. I guess this is because the data is either not shuffled or is shuffled with the same seed every time. Could you help me check this? I tried changing set_seed, but it did not help.
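To illustrate what I suspect is happening: if the dataset is shuffled with a hard-coded seed, the order comes out identical no matter what global seed is set beforehand. A minimal, hypothetical sketch (not this repo's actual code; the hard-coded seed=42 is just for illustration):

from datasets import Dataset
from transformers import set_seed

ds = Dataset.from_dict({"idx": list(range(10))})

for global_seed in (0, 1234):
    set_seed(global_seed)           # seeds python, numpy and torch RNGs
    shuffled = ds.shuffle(seed=42)  # hypothetical hard-coded shuffle seed
    print(global_seed, shuffled["idx"])  # prints the same order both times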

lucasliunju commented 1 month ago

Hi, thanks for your great repo. Could you please provide the training command for LoRA on tulu-v2?

hamishivi commented 1 month ago

Hi, we recommend using the script finetune_with_accelerate.sh, which looks like this:

MODEL_SIZE=7B
NUM_GPUS=8
BATCH_SIZE_PER_GPU=1
TOTAL_BATCH_SIZE=128
GRADIENT_ACC_STEPS=$(($TOTAL_BATCH_SIZE/$NUM_GPUS/$BATCH_SIZE_PER_GPU))
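# With the defaults above this is 128 / (8 * 1) = 16 accumulation steps per optimizer step.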
echo "Training llama model ${MODEL_SIZE} using $NUM_GPUS GPUs, $BATCH_SIZE_PER_GPU batch size per GPU, $GRADIENT_ACC_STEPS gradient accumulation steps"

# You can also set --gradient_checkpointing or use `stage3_offloading_accelerate.conf` to save memory, 
# but it will trade off speed.
accelerate launch \
    --mixed_precision bf16 \
    --num_machines 1 \
    --num_processes $NUM_GPUS \
    --use_deepspeed \
    --deepspeed_config_file configs/ds_configs/stage3_no_offloading_accelerate.conf \
    open_instruct/finetune.py \
    --model_name_or_path meta-llama/Llama-2-7b-hf \
    --use_flash_attn \
    --tokenizer_name meta-llama/Llama-2-7b-hf \
    --use_slow_tokenizer \
    --dataset_name allenai/tulu-v2-sft-mixture \
    --max_seq_length 8192 \
    --preprocessing_num_workers 128 \
    --per_device_train_batch_size $BATCH_SIZE_PER_GPU \
    --gradient_accumulation_steps $GRADIENT_ACC_STEPS \
    --learning_rate 2e-5 \
    --reduce_loss sum \
    --lr_scheduler_type linear \
    --warmup_ratio 0.03 \
    --weight_decay 0. \
    --num_train_epochs 2 \
    --output_dir output_dir \
    --with_tracking \
    --report_to tensorboard \
    --logging_steps 1

This should get scores similar to our officially released models, although not identical, since those original checkpoints were trained with a different codebase on TPUs (https://github.com/hamishivi/EasyLM).
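On the LoRA question: there is also a LoRA variant of this script in the repo (finetune_lora_with_accelerate.sh). Conceptually it wraps the model with peft adapters before training; here is a rough sketch of that idea, where the rank/alpha/dropout values are placeholders rather than the settings we actually used:

from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

# Rough sketch of applying LoRA adapters with peft; finetune.py handles this
# via its own flags, so treat the values below as placeholders.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=64,              # LoRA rank (placeholder)
    lora_alpha=16,     # scaling factor (placeholder)
    lora_dropout=0.1,  # dropout on LoRA layers (placeholder)
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable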

Seeing the same training curve is odd, since the DataLoader is definitely supposed to shuffle the data (https://github.com/allenai/open-instruct/blob/main/open_instruct/finetune.py#L448). It might be good to verify whether the dataloader really yields the same samples in the same order even with different seeds; a quick standalone check is sketched below.
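With shuffle=True and no fixed generator, a DataLoader draws its ordering from the global torch RNG, so different seeds should produce different batches. A minimal sketch (standalone, not the repo's code):

from torch.utils.data import DataLoader
from transformers import set_seed

data = list(range(16))

def first_batch(seed):
    set_seed(seed)  # seeds python, numpy and torch RNGs
    loader = DataLoader(data, batch_size=4, shuffle=True)
    return next(iter(loader)).tolist()

print(first_batch(1))
print(first_batch(2))  # should differ; if it matches, the shuffle seed is pinned somewhere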