TinyLLaVA / TinyLLaVA_Factory

A Framework of Small-scale Large Multimodal Models
https://arxiv.org/abs/2402.14289
Apache License 2.0

The loss is NaN when pre-training TinyLlama using the share recipe #13

Open xushilin1 opened 8 months ago

xushilin1 commented 8 months ago

Here is my training script:

deepspeed tinyllava/train/train.py \
    --deepspeed ./scripts/zero2.json \
    --model_name_or_path checkpoints/TinyLlama-1.1B-Chat-v1.0/ \
    --version plain \
    --data_path datasets/LLaVA-Pretrain/blip_laion_cc_sbu_558k.json \
    --image_folder datasets/LLaVA-Pretrain/images \
    --vision_tower checkpoints/clip-vit-large-patch14-336 \
    --pretrain_mm_mlp_adapter output/pretrain/llava-tinyllama-1.1b/mm_projector.bin \
    --mm_projector_type mlp2x_gelu \
    --tune_entire_model True \
    --tune_vit_from_layer 12 \
    --mm_vision_select_layer -2 \
    --mm_use_im_start_end False \
    --mm_use_im_patch_token False \
    --bf16 True \
    --output_dir output/pretrain/llava-tinyllama-1.1b_share \
    --num_train_epochs 1 \
    --per_device_train_batch_size 32 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 1 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 24000 \
    --save_total_limit 1 \
    --learning_rate 2e-5 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 True \
    --model_max_length 2048 \
    --gradient_checkpointing True \
    --dataloader_num_workers 4 \
    --lazy_preprocess True
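
As a general first check (not something stated in this thread), it can help to verify that the projector checkpoint passed via --pretrain_mm_mlp_adapter contains no NaN/Inf values before debugging the training run itself. A minimal sketch, assuming the path from the script above and a local PyTorch install:

python -c "
import torch
# Load the projector weights referenced by --pretrain_mm_mlp_adapter (path assumed from the script above).
sd = torch.load('output/pretrain/llava-tinyllama-1.1b/mm_projector.bin', map_location='cpu')
for name, t in sd.items():
    # Flag any tensor that already contains NaN or Inf entries.
    bad = bool(torch.isnan(t).any() or torch.isinf(t).any())
    print(name, tuple(t.shape), 'nan_or_inf =', bad)
"
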
TuuSiwei commented 5 months ago

Maybe you should try a lower version of DeepSpeed, such as 0.10?
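
For reference, pinning DeepSpeed to the older 0.10.x series suggested above (the exact pin is an assumption, not something given in the thread) could look like:

# Hypothetical pin to the 0.10.x series; adjust the exact version as needed.
pip install "deepspeed<0.11"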