haotian-liu / LLaVA

[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
https://llava.hliu.cc
Apache License 2.0

finetune Always stuck #868

Open cxl-ustb opened 9 months ago

cxl-ustb commented 9 months ago

Question

Thank you for your work. I used 8x V100 32GB GPUs, 94 CPUs, and 364GB of memory.

```bash
#!/bin/bash

################## VICUNA ##################
PROMPT_VERSION=v1
MODEL_VERSION="vicuna-v1-3-7b"
################## VICUNA ##################

deepspeed llava/train/train_mem.py \
    --deepspeed ./scripts/zero2.json \
    --model_name_or_path ./checkpoints/$MODEL_VERSION \
    --version $PROMPT_VERSION \
    --data_path /mnt/bd/demo/cxl/LLaVA/data/llava_instruct_80k.json \
    --image_folder /mnt/bd/demo/cxl/LLaVA/data/train2017 \
    --vision_tower openai/clip-vit-large-patch14 \
    --pretrain_mm_mlp_adapter ./checkpoints/llava-$MODEL_VERSION-pretrain/mm_projector.bin \
    --mm_vision_select_layer -2 \
    --mm_use_im_start_end False \
    --mm_use_im_patch_token False \
    --bf16 False \
    --output_dir ./checkpoints/llava-$MODEL_VERSION-finetune \
    --num_train_epochs 1 \
    --per_device_train_batch_size 2 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 1 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 50000 \
    --save_total_limit 1 \
    --learning_rate 2e-5 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 False \
    --model_max_length 2048 \
    --gradient_checkpointing True \
    --dataloader_num_workers 1 \
    --lazy_preprocess False \
    --report_to wandb
```

20231128-090552
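A common first step when a multi-GPU run stalls like this is to turn on NCCL's debug logging before relaunching, to see whether the hang happens during inter-GPU communication. A minimal diagnostic sketch; the environment variables below are standard NCCL settings, not something prescribed in this thread:

```bash
# Diagnostic sketch (not from the original report): make NCCL log its
# initialization and collective operations so a stalled rank is visible.
export NCCL_DEBUG=INFO
export NCCL_DEBUG_SUBSYS=INIT,COLL   # optional: narrows the very verbose output

# If peer-to-peer transport on this host is suspected, disabling P2P
# (at a performance cost) can confirm the diagnosis:
# export NCCL_P2P_DISABLE=1

# ...then relaunch the same deepspeed command as above.
```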

loveunk commented 8 months ago

Same issue here, not yet fixed.

loveunk commented 8 months ago

https://github.com/huggingface/transformers/issues/28280
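While following up on reports like the one linked above, it can also help to find out where each training rank is actually blocked. A sketch using py-spy to dump the Python stacks of the training processes; this is a hypothetical diagnostic, not something suggested in either thread:

```bash
# Hypothetical diagnostic: dump the Python stack of every rank of the
# stuck training job (run as the same user that launched training).
pip install py-spy
for pid in $(pgrep -f train_mem.py); do
    echo "=== process $pid ==="
    py-spy dump --pid "$pid"
done
```

If every rank shows the same collective call (e.g. an all-reduce) near the top of its stack, the hang is most likely in communication rather than in data loading.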