```bash
deepspeed --include localhost:1 --master_port 29597 llava/train/train_mem.py \
    --deepspeed ./scripts/zero3.json \
    --model_name_or_path /ssd1/suixin02/data/exp/llava/liuhaotian/llava-v1.6-vicuna-7b \
    --version v1 \
    --data_path /ssd1/suixin02/data/exp/llava/liuhaotian/LLaVA-Instruct-150K/chinese_and_original.json \
    --image_folder /ssd1/suixin02/data/exp/llava/liuhaotian/LLaVA-Instruct-150K \
    --vision_tower openai/clip-vit-large-patch14-336 \
    --mm_projector_type mlp2x_gelu \
    --mm_vision_select_layer -2 \
    --mm_use_im_start_end False \
    --mm_use_im_patch_token False \
    --image_aspect_ratio pad \
    --group_by_modality_length True \
    --bf16 True \
    --output_dir /ssd1/suixin02/data/exp/llava/checkpoints/llava-v1.5-34b-task-lora \
    --num_train_epochs 1 \
    --per_device_train_batch_size 2 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 16 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 50000 \
    --save_total_limit 1 \
    --learning_rate 2e-5 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 True \
    --model_max_length 2048 \
    --gradient_checkpointing True \
    --dataloader_num_workers 4 \
    --lazy_preprocess True \
    --report_to wandb
```
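For context, the `./scripts/zero3.json` passed via `--deepspeed` enables ZeRO stage-3 parameter/optimizer sharding. A minimal sketch of such a config, modeled on the stock one shipped in the LLaVA repo (the poster's actual file may differ), looks like:

```json
{
    "bf16": {
        "enabled": "auto"
    },
    "train_micro_batch_size_per_gpu": "auto",
    "train_batch_size": "auto",
    "gradient_accumulation_steps": "auto",
    "zero_optimization": {
        "stage": 3,
        "overlap_comm": true,
        "contiguous_gradients": true,
        "sub_group_size": 1e9,
        "reduce_bucket_size": "auto",
        "stage3_prefetch_bucket_size": "auto",
        "stage3_param_persistence_threshold": "auto",
        "stage3_max_live_parameters": 1e9,
        "stage3_max_reuse_distance": 1e9,
        "stage3_gather_16bit_weights_on_model_save": true
    }
}
```

With the `"auto"` values, the HuggingFace `Trainer` integration fills in batch size, precision, and bucket sizes from the command-line arguments above, so the JSON does not need to duplicate them.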
I am facing the same issue. Do you know how to solve it? Thanks in advance!
When I fine-tune LLaVA-1.5-7B on 4× RTX 3090 (24 GB), I run into the same problem. How did you solve it? Thanks.