YuRuolalala opened 4 months ago
I have modified the script as follows:
```bash
(damo) [root@bgmm01 custom]# cat finetune.sh
#!/bin/bash

# Environment Variables
export CUDA_VISIBLE_DEVICES=4,5
WORLD_SIZE=${1:-1}
NPROC_PER_NODE=${2:-1}  # Only use 1 process per node
MASTER_ADDR="127.0.0.1"
MASTER_PORT=16666
RANK=0

# Training Arguments
GLOBAL_BATCH_SIZE=128
LOCAL_BATCH_SIZE=4
GRADIENT_ACCUMULATION_STEPS=$[$GLOBAL_BATCH_SIZE/($WORLD_SIZE*$NPROC_PER_NODE*$LOCAL_BATCH_SIZE)]

# Log Arguments
export TRANSFORMERS_OFFLINE=1
export WANDB_PROJECT=videollama2_vllava
RUN_NAME=videollama2_vllava
DATA_DIR=datasets
OUTP_DIR=/data/VideoLLaMA2/work_dirs

torchrun --nnodes $WORLD_SIZE \
    --nproc_per_node $NPROC_PER_NODE \
    --master_addr=$MASTER_ADDR \
    --master_port=$MASTER_PORT \
    --node_rank $RANK \
    /data/VideoLLaMA2/videollama2/train_flash_attn.py \
    --deepspeed /data/VideoLLaMA2/scripts/zero3.json \
    --version v1_mistral \
    --vision_tower openai/clip-vit-large-patch14-336 \
    --mm_projector_type stc_connector \
    --model_name_or_path /data/VideoLLaMA2/Mistral-7B-Instruct-v0.2/Mistral-7B-Instruct-v0.2 \
    --data_path /data/VideoLLaMA2/datasets/custom_sft/custom.json \
    --data_folder /data/VideoLLaMA2/datasets/custom_sft/ \
    --freeze_backbone True \
    --pretrain_mm_mlp_adapter /data/VideoLLaMA2/model--base/mm_projector.bin \
    --mm_vision_select_layer -2 \
    --mm_use_im_start_end False \
    --mm_use_im_patch_token False \
    --image_aspect_ratio pad \
    --num_frames 8 \
    --bf16 True \
    --tf32 True \
    --fp16 False \
    --output_dir ${OUTP_DIR}/${WANDB_PROJECT}/finetune_${RUN_NAME} \
    --num_train_epochs 1 \
    --per_device_train_batch_size $LOCAL_BATCH_SIZE \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps $GRADIENT_ACCUMULATION_STEPS \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 500 \
    --save_total_limit 99 \
    --learning_rate 2e-5 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --model_max_length 2048 \
    --gradient_checkpointing True \
    --dataloader_num_workers 4 \
    --report_to tensorboard \
    --run_name $RUN_NAME
```
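For reference, with the values above the accumulation arithmetic should work out to 128 / (1 × 1 × 4) = 32 steps per optimizer update; a quick shell check of the same expression, written with the standard `$(( ))` form instead of the older `$[ ]` form:

```bash
# Same computation as GRADIENT_ACCUMULATION_STEPS with the script's values plugged in.
echo $(( 128 / (1 * 1 * 4) ))   # prints 32
```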
Generally, isn't GPU selection done by setting `export CUDA_VISIBLE_DEVICES=4,5`?
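As I understand it, `CUDA_VISIBLE_DEVICES` remaps the selected physical GPUs to logical indices starting at 0 inside the process, so a minimal check like this (a sketch, assuming a standard PyTorch install) should report two devices, with logical `cuda:0` being physical GPU 4:

```bash
export CUDA_VISIBLE_DEVICES=4,5
python -c "import torch; print(torch.cuda.device_count()); print(torch.cuda.get_device_name(0))"
# expected: 2, followed by the name of physical GPU 4
```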
I have monitored the machine, and GPUs 4 and 5 are indeed almost completely idle. But when I run the script, it reports an out-of-memory error and keeps using a GPU that is already occupied by other models. How can this be resolved?
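In case it helps to diagnose, one way to confirm which physical GPUs the launched training processes actually attach to (a sketch, assuming `nvidia-smi` is available) is:

```bash
# List every compute process together with the GPU it is bound to and its memory use.
nvidia-smi --query-compute-apps=pid,process_name,gpu_uuid,used_memory --format=csv
```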