Closed — Wondersui closed this issue 4 months ago
```
WARNING: tokenization mismatch: 542 vs. 533. (ignored)
WARNING: tokenization mismatch: 295 vs. 291. (ignored)
WARNING: tokenization mismatch: 378 vs. 372. (ignored)
WARNING: tokenization mismatch: 193 vs. 187. (ignored)
WARNING: tokenization mismatch: 543 vs. 534. (ignored)
WARNING: tokenization mismatch: 148 vs. 144. (ignored)
WARNING: tokenization mismatch: 561 vs. 552. (ignored)
WARNING: tokenization mismatch: 430 vs. 426. (ignored)
```
When I try to finetune bunny-phi3, the warnings above appear, and I am not sure whether they affect the training result.

This is my training script:
```shell
#!/bin/bash

MODEL_TYPE=phi-3
OUTPUT_DIR=bunny-lora-$MODEL_TYPE

mkdir -p ./bunny/checkpoints-$MODEL_TYPE/$OUTPUT_DIR

deepspeed bunny/train/train.py \
    --lora_enable True --lora_r 128 --lora_alpha 256 --mm_projector_lr 2e-5 \
    --deepspeed ./script/deepspeed/zero3.json \
    --model_name_or_path ./bunny/base_model/Bunny-v1_0-4B \
    --model_type $MODEL_TYPE \
    --version bunny \
    --data_path ./Bunny-v1_0-data/finetune/bunny_695k.json \
    --image_folder ./bunny_data/Bunny-v1_0-data/finetune/images \
    --vision_tower ./bunny/base_model/siglip-so400m-patch14-384 \
    --mm_projector_type mlp2x_gelu \
    --image_aspect_ratio pad \
    --group_by_modality_length False \
    --bf16 False \
    --fp16 True \
    --output_dir ./bunny/checkpoints-$MODEL_TYPE/$OUTPUT_DIR \
    --num_train_epochs 1 \
    --per_device_train_batch_size 4 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 4 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 500 \
    --save_total_limit 1 \
    --learning_rate 2e-4 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 False \
    --model_max_length 2048 \
    --gradient_checkpointing True \
    --dataloader_num_workers 4 \
    --lazy_preprocess True \
    --report_to none | tee 2>&1
```
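For intuition, this kind of warning is typically produced by a sanity check in the data preprocessing: the code tokenizes the whole conversation in one pass, re-tokenizes each turn separately (to compute label masks), and compares the token counts. When the conversation template does not match the model's tokenizer, tokens merge differently across turn boundaries and the counts disagree. The sketch below is a generic illustration with a toy tokenizer — `simple_tokenize` and `check_round_trip` are hypothetical names, not Bunny's actual API.

```python
def simple_tokenize(text):
    # Toy BPE-like tokenizer: single characters, except the pair "ab"
    # merges into one token. Real tokenizers merge subwords similarly,
    # which is why tokenizing whole-text vs. per-turn can differ.
    tokens, i = [], 0
    while i < len(text):
        if text[i:i + 2] == "ab":
            tokens.append("ab")
            i += 2
        else:
            tokens.append(text[i])
            i += 1
    return tokens

def check_round_trip(turns):
    # Tokenize the full conversation in one pass...
    whole = simple_tokenize("".join(turns))
    # ...then tokenize each turn separately and concatenate.
    parts = [tok for t in turns for tok in simple_tokenize(t)]
    if len(whole) != len(parts):
        print(f"WARNING: tokenization mismatch: "
              f"{len(whole)} vs. {len(parts)}. (ignored)")
    return len(whole), len(parts)

# The turn boundary splits "a" | "b": merged in the one-pass tokenization,
# but kept separate when each turn is tokenized on its own.
print(check_round_trip(["hella", "bye"]))
```

In practice the mismatch means the label mask may be shifted by a few tokens relative to the inputs, so it is worth fixing the template rather than relying on the "(ignored)" fallback.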
Try setting `--version phi3` instead of `--version bunny`.