Haochen-Wang409 commented:
#!/bin/bash
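# Print the number of visible GPUs as a quick sanity check (the count is not used below).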
nvidia-smi --query-gpu=gpu_name --format=csv,noheader | wc -l
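# Launch single-node fine-tuning on 2 GPUs via torch.distributed.run.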
python -m torch.distributed.run --nnodes=1 --nproc_per_node=2 --master_port=20001 llava/train/train_mem.py \
--model_name_or_path Qwen/Qwen2-1.5B-Instruct \
--version v1 \
--data_path ./playground/llava_images/llava_v1_5_mix665k.json \
--image_folder ./playground/llava_images \
--vision_tower ./checkpoints/clip-vit-large-patch14-336 \
--pretrain_mm_mlp_adapter ./checkpoints/Qwen2-1.5B-Instruct-pretrain/mm_projector.bin \
--mm_projector_type mlp2x_gelu \
--mm_vision_select_layer -2 \
--mm_use_im_start_end False \
--mm_use_im_patch_token False \
--image_aspect_ratio pad \
--group_by_modality_length True \
--bf16 True \
--output_dir ./checkpoints/Qwen2-1.5B-Instruct-Vision \
--num_train_epochs 1 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--gradient_accumulation_steps 16 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 50000 \
--save_total_limit 1 \
--learning_rate 2e-5 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--tf32 True \
--model_max_length 2048 \
--gradient_checkpointing True \
--dataloader_num_workers 4 \
--lazy_preprocess True
# Optional: re-enable these (and restore the trailing "\" on the line above) as needed.
# --report_to wandb \
# --deepspeed ./scripts/zero2.json \
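(For reference, this launch gives an effective global batch size of 2 processes × 4 per-device × 16 gradient-accumulation steps = 128, matching the global batch size of the official LLaVA-v1.5 finetuning recipe.)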
This is the fine-tuning script I am using at the moment. Since the author hasn't adapted the code for Qwen and I only started debugging recently, there are still a few bugs in the finetune stage.
Thank you for the very prompt reply!
Are you also trying to reproduce LLaVA with Qwen lately? We should keep in touch and compare notes~
Yes, I want to replace the original LLM with Qwen2. I noticed that their tokenizers differ in several ways; it seems the conversation template has to be changed.
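Concretely, the tokenizer mismatch is easy to verify. A minimal sketch, assuming the public Hub checkpoints (lmsys/vicuna-7b-v1.5 is the LLM behind the original LLaVA-v1.5):

from transformers import AutoTokenizer

# Vicuna uses the LLaMA tokenizer, with <s>/</s> as BOS/EOS; Qwen2 has no
# BOS token and uses ChatML-style <|im_start|>/<|im_end|> turn markers,
# with <|im_end|> serving as EOS.
vicuna = AutoTokenizer.from_pretrained("lmsys/vicuna-7b-v1.5")
qwen2 = AutoTokenizer.from_pretrained("Qwen/Qwen2-1.5B-Instruct")
print(vicuna.bos_token, vicuna.eos_token)  # <s> </s>
print(qwen2.bos_token, qwen2.eos_token)    # None <|im_end|>

This is why the Vicuna-style template selected by --version v1 in the script above does not match Qwen2's chat format out of the box.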
Hi, I just did another round of debugging; pretraining and finetuning both work on Qwen2 now.
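For anyone following along, one common way to adapt the template is to register a ChatML-style Conversation in llava/conversation.py and select it via --version. The sketch below is an assumption about the general approach, not the exact fix made here; it reuses LLaVA's existing MPT separator style, which renders each turn as role + message + sep and can therefore reproduce Qwen2's ChatML format:

# Hypothetical template; "qwen2" is an illustrative name. If this is added
# inside llava/conversation.py itself, the import line is unnecessary.
from llava.conversation import Conversation, SeparatorStyle, conv_templates

conv_qwen2 = Conversation(
    system="<|im_start|>system\nYou are a helpful assistant.",
    roles=("<|im_start|>user\n", "<|im_start|>assistant\n"),
    version="qwen2",
    messages=(),
    offset=0,
    sep_style=SeparatorStyle.MPT,  # emits role + message + sep per turn
    sep="<|im_end|>\n",
)
conv_templates["qwen2"] = conv_qwen2

Note that llava/train/train.py also branches on the conversation template when building the label mask, so the preprocessing for the new separators would need to be handled there as well.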
Question
I wonder about the performance when using Qwen2 as the LLM. Does it outperform the original LLaVA-v1.5?
By the way, are there any scripts for instruction tuning? I only found the script for pretraining (pretrain_qwen2.sh).