BAAI-DCAI / Bunny

A family of lightweight multimodal models.
Apache License 2.0

Matrix dimension mismatch when launching the web demo after fine-tuning #100

Closed · htesd closed this 2 weeks ago

htesd commented 2 weeks ago

I fine-tuned on my own dataset, but afterwards, whether using LoRA or full-parameter fine-tuning, I get the following error whenever I input an image:


RuntimeError: mat1 and mat2 shapes cannot be multiplied (729x1152 and 3456x4096)


Judging from the call stack, the input dimension of my projector seems to have changed unexpectedly. Here is my fine-tuning script:

#!/bin/bash

MODEL_TYPE=llama3-8b

PRETRAIN_DIR=bunny-$MODEL_TYPE-pretrain
OUTPUT_DIR=bunny-lora-$MODEL_TYPE

mkdir -p ./checkpoints-$MODEL_TYPE/$OUTPUT_DIR

deepspeed bunny/train/train.py \
    --lora_enable True --lora_r 128 --lora_alpha 256 --mm_projector_lr 2e-5 \
    --deepspeed ./script/deepspeed/zero3.json \
    --model_name_or_path /home/iiap/大语言模型/Bunny-v1_1-Llama-3-8B-V \
    --model_type $MODEL_TYPE \
    --version llama \
    --data_path /home/iiap/datasets/Bunny-v1_0-data/finetune/tdt.json \
    --image_folder /home/iiap/datasets/Bunny-v1_0-data/finetune/images \
    --vision_tower /home/iiap/基础模型/siglip-so400m-patch14-384 \
    --mm_projector_type mlp2x_gelu \
    --image_aspect_ratio pad \
    --group_by_modality_length False \
    --bf16 True \
    --output_dir ./checkpoints-$MODEL_TYPE/$OUTPUT_DIR \
    --num_train_epochs 8 \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 2 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 500 \
    --save_total_limit 1 \
    --learning_rate 2e-4 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 True \
    --model_max_length 2048 \
    --gradient_checkpointing True \
    --dataloader_num_workers 4 \
    --lazy_preprocess True \
    --report_to none 2>&1 | tee ./checkpoints-$MODEL_TYPE/$OUTPUT_DIR/log.txt
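For context, the two shapes in the traceback line up with this configuration: siglip-so400m-patch14-384 emits 729 patch tokens with hidden size 1152, while the projector weights loaded from Bunny-v1_1-Llama-3-8B-V expect 3456-dim input. A minimal, hypothetical PyTorch repro of the mismatch (a sketch, not Bunny's actual code):

import torch
import torch.nn as nn

# Single-scale SigLIP features: 729 patch tokens, 1152 channels,
# matching the "mat1" side of the error message.
vision_features = torch.randn(729, 1152)

# First layer of the pretrained mlp2x_gelu projector as saved in the
# checkpoint: in_features=3456, out_features=4096 (Llama-3-8B hidden
# size), matching the "mat2" side of the error message.
projector_fc1 = nn.Linear(3456, 4096)

projector_fc1(vision_features)
# RuntimeError: mat1 and mat2 shapes cannot be multiplied (729x1152 and 3456x4096)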

Isaachhh commented 2 weeks ago

Bunny-v1.1-Llama-3-8B-V is trained with S^2-Wrapper enabled.

So, add --use_s2 True to your fine-tuning command.
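This also explains the numbers in the error. With S^2-Wrapper enabled, the vision tower is run at multiple image scales and the per-scale features are concatenated along the channel dimension, which is consistent with the arithmetic in the traceback: 3456 = 3 × 1152. A rough sketch of that idea (assuming three scales; not Bunny's actual implementation):

import torch

# Hypothetical per-scale SigLIP features: 729 tokens × 1152 channels each.
per_scale = [torch.randn(729, 1152) for _ in range(3)]

# Channel-wise concatenation yields the 3456-dim input the pretrained
# projector expects.
s2_features = torch.cat(per_scale, dim=-1)
print(s2_features.shape)  # torch.Size([729, 3456])

In short: the pretrained projector was built for 3456-dim S^2 features, so fine-tuning without --use_s2 feeds it single-scale 1152-dim features, hence the crash.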

htesd commented 2 weeks ago

Thank you very much. After adding the parameter, the model runs correctly (https://github.com/BAAI-DCAI/Bunny/issues/100#issuecomment-2205975845).