Imbalanced and unstable GPU usage during training

I run the script below and hit imbalanced and unstable GPU usage across the 8 GPUs during training:
export OMP_NUM_THREADS=8
export NCCL_IB_DISABLE=0
export NCCL_IB_GID_INDEX=3
export NCCL_SOCKET_IFNAME=ens19np0
export NCCL_DEBUG=INFO
export NUM_GPUS=8
export NNODES=1
export RANK=0
export ADDR="localhost"
export PORT="29500"
export PYTHONPATH=$(pwd)

LLM_VERSION="/data/tbsi/model_weights/Qwen/Qwen2.5-7B-Instruct"
LLM_VERSION_CLEAN="${LLM_VERSION//\//_}"
VISION_MODEL_VERSION="/data/tbsi/model_weights/clip-vit-large-patch14"
VISION_MODEL_VERSION_CLEAN="${VISION_MODEL_VERSION//\//_}"
DATA_ROOT="/data/tbsi/datasets/multimodal/LLaVA-NeXT-Data"
PROJECTOR_NAME="llavanext-_data_tbsi_model_weights_clip-vit-large-patch14-_data_tbsi_model_weights_Qwen_Qwen2.5-7B-Instruct-mlp2x_gelu-pretrain_blip558k_plain"
PROMPT_VERSION="qwen_1_5"
BASE_RUN_NAME="llavanext-${VISION_MODEL_VERSION_CLEAN}-${LLM_VERSION_CLEAN}-mlp2x_gelu-pretrain_blip558k-finetune_llavanext780k"
echo "BASE_RUN_NAME: ${BASE_RUN_NAME}"

ACCELERATE_CPU_AFFINITY=1 torchrun --nproc_per_node="${NUM_GPUS}" --nnodes="${NNODES}" --node_rank="${RANK}" --master_addr="${ADDR}" --master_port="${PORT}" \
    llava/train/train_mem.py \
    --deepspeed scripts/zero3.json \
    --model_name_or_path ${LLM_VERSION} \
    --version ${PROMPT_VERSION} \
    --data_path ${DATA_ROOT}/llava_next_raw_format/llava_next_raw_format_processed.json \
    --image_folder ${DATA_ROOT}/llava_next_raw_format \
    --pretrain_mm_mlp_adapter /home/lanyun/project/train/unicom/checkpoints/projectors/${PROJECTOR_NAME}/mm_projector.bin \
    --mm_tunable_parts mm_vision_tower,mm_mlp_adapter,mm_language_model \
    --mm_vision_tower_lr 2e-6 \
    --vision_tower ${VISION_MODEL_VERSION} \
    --mm_projector_type mlp2x_gelu \
    --mm_vision_select_layer -2 \
    --mm_use_im_start_end False \
    --mm_use_im_patch_token False \
    --group_by_modality_length True \
    --image_aspect_ratio anyres \
    --image_grid_pinpoints "[(336, 672), (672, 336), (672, 672), (1008, 336), (336, 1008)]" \
    --mm_patch_merge_type spatial_unpad \
    --bf16 True \
    --run_name $BASE_RUN_NAME \
    --output_dir "./checkpoints/${BASE_RUN_NAME}" \
    --num_train_epochs 1 \
    --per_device_train_batch_size 8 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 2 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 3000 \
    --save_total_limit 1 \
    --learning_rate 1e-5 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 True \
    --model_max_length 32768 \
    --gradient_checkpointing True \
    --dataloader_num_workers 16 \
    --lazy_preprocess True \
    --report_to wandb \
    --torch_compile True \
    --torch_compile_backend "inductor" \
    --dataloader_drop_last True
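To check whether the imbalance really comes from uneven token counts rather than from hardware or NCCL setup, a small diagnostic can be dropped into the training step. This is only a sketch and not part of the repo's code; the function name, the pad_token_id argument, and the assumption of the NCCL backend (which requires the gathered tensor to live on the GPU) are mine.

import torch
import torch.distributed as dist

def log_per_rank_token_counts(input_ids: torch.Tensor, pad_token_id: int) -> None:
    """Print how many non-padding tokens each rank received for the current step."""
    # NCCL all_gather works on CUDA tensors, so keep the count on the GPU.
    local_tokens = (input_ids != pad_token_id).sum().to("cuda")
    if dist.is_available() and dist.is_initialized():
        counts = [torch.zeros_like(local_tokens) for _ in range(dist.get_world_size())]
        dist.all_gather(counts, local_tokens)
        if dist.get_rank() == 0:
            print("tokens per rank:", [int(c) for c in counts])
    else:
        print("tokens (single process):", int(local_tokens))

If the printed per-rank counts differ by a large factor from step to step, the GPU utilization curves will look imbalanced and unstable in exactly the way described above.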
The cause of this problem is that different ranks receive different token lengths within the same global batch. We will fix this in the next version.
Please keep following this repo for updates.
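Until that fix lands, one way to reduce the imbalance is to make token lengths within each global batch more uniform, similar in spirit to the existing --group_by_modality_length True flag. The sketch below is only an illustration of that idea, not the maintainers' fix; the lengths list (per-sample token counts) and the choice of one global batch as the sorting window are assumptions.

import random
from typing import List

def length_grouped_indices(lengths: List[int], batch_size: int, world_size: int,
                           seed: int = 0) -> List[int]:
    """Shuffle, then sort within windows of one global batch so that samples
    dispatched to different ranks in the same step have similar lengths."""
    rng = random.Random(seed)
    indices = list(range(len(lengths)))
    rng.shuffle(indices)
    mega = batch_size * world_size  # number of samples in one global batch
    grouped = []
    for start in range(0, len(indices), mega):
        window = indices[start:start + mega]
        window.sort(key=lambda i: lengths[i], reverse=True)
        grouped.extend(window)
    return grouped

Feeding these indices to the sampler keeps per-step token counts across the 8 ranks closer together, so ranks spend less time waiting on each other at the gradient all-reduce.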