BAAI-DCAI / Bunny

A family of lightweight multimodal models.
Apache License 2.0

Model only responds with fine-tuned answers #97

Open tamdan17 opened 2 weeks ago

tamdan17 commented 2 weeks ago

I am fine-tuning the Bunny-v1_1-Llama-3-8B-V model on a dataset with a task that requires answering "Yes" or "No". However, after fine-tuning, the model only responds with "Yes" or "No" even to questions that are of a different type from those in the fine-tuning dataset. I suspect there might be a mistake in my fine-tuning process; it seems like the code is training from scratch rather than fine-tuning from the pre-trained model. Could anyone help identify the issue? Many thanks.

Here's my LoRA fine-tuning script (finetune_lora.sh):

#!/bin/bash

MODEL_TYPE=llama3-8b

PRETRAIN_DIR=bunny-pretrain-llama3-8b-siglip-s2
OUTPUT_DIR=bunny-lora-$MODEL_TYPE-full
LAST_STEP_FILE=./checkpoints-$MODEL_TYPE/$OUTPUT_DIR/last_step.txt

mkdir -p ./checkpoints-$MODEL_TYPE/$OUTPUT_DIR
LOG_FILE=./checkpoints-$MODEL_TYPE/$OUTPUT_DIR/log.txt

run_training() {
    deepspeed bunny/train/train.py \
    --lora_enable True --lora_r 128 --lora_alpha 256 --mm_projector_lr 2e-5 \
    --deepspeed ./script/deepspeed/zero3.json \
    --model_name_or_path Bunny-v1_1-Llama-3-8B-V \
    --model_type $MODEL_TYPE \
    --version llama \
    --data_path finetune/dataset.json \
    --image_folder finetune/images_train_full \
    --vision_tower siglip-so400m-patch14-384 \
    --mm_projector_type mlp2x_gelu \
    --image_aspect_ratio pad \
    --group_by_modality_length False \
    --bf16 True \
    --output_dir ./checkpoints-$MODEL_TYPE/$OUTPUT_DIR \
    --num_train_epochs 1 \
    --per_device_train_batch_size 4 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 1 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 500 \
    --save_total_limit 1 \
    --learning_rate 2e-4 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 True \
    --model_max_length 2048 \
    --gradient_checkpointing True \
    --dataloader_num_workers 4 \
    --lazy_preprocess True \
    --report_to none | tee -a $LOG_FILE
}
run_training
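
One way to tell whether the run is really fine-tuning on top of the pre-trained weights is to inspect what the job writes to the output directory: with --lora_enable True, only small adapter files (plus non-LoRA trainables such as the projector) should be saved, not full multi-gigabyte model shards. The filenames below are an assumption based on the LLaVA-style saving convention that Bunny appears to follow, so treat this as a rough check:

# Rough sanity check (assumed filenames): a LoRA run should produce small adapter
# files rather than full model shards.
ls -lh ./checkpoints-$MODEL_TYPE/$OUTPUT_DIR
# expected, approximately: adapter_config.json, adapter_model.safetensors (or .bin),
# non_lora_trainables.bin, config.json, trainer_state.json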

After training, I merge the LoRA weights into the base model using the following script:

python script/merge_lora_weights.py \
    --model-path checkpoints-llama3-8b/bunny-lora-llama3-8b-full \
    --model-base Bunny-v1_1-Llama-3-8B-V \
    --model-type llama3-8b \
    --save-model-path bunny-lora-llama3-8b-full-merged
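
As a quick check that the merge produced a complete standalone model, the output directory can be listed; the expected contents below are just the usual Hugging Face layout, not something guaranteed by the merge script:

ls bunny-lora-llama3-8b-full-merged
# roughly expected: config.json, tokenizer files, and one or more model *.safetensors/.bin shards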
Isaachhh commented 1 week ago

Bunny-v1_1-Llama-3-8B-V is trained with the S^2-Wrapper enabled, so the fine-tuning run has to enable it as well; otherwise the visual features fed to the projector won't match what the pre-trained weights expect.

So, add --use_s2 True to your training arguments.
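
For reference, a minimal sketch of how the flag slots into finetune_lora.sh above (the flag name comes from this thread; its position among the other arguments is arbitrary, and everything else stays as in the original script):

    deepspeed bunny/train/train.py \
        --lora_enable True --lora_r 128 --lora_alpha 256 --mm_projector_lr 2e-5 \
        --use_s2 True \
        --deepspeed ./script/deepspeed/zero3.json \
        --model_name_or_path Bunny-v1_1-Llama-3-8B-V \
        ...   # remaining arguments unchanged from finetune_lora.sh above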