InternLM / InternLM-XComposer

InternLM-XComposer2 is a groundbreaking vision-language large model (VLLM) excelling in free-form text-image composition and comprehension.

RuntimeError: expected scalar type Float but found BFloat16 #245

Open · TankNee opened this issue 2 months ago

TankNee commented 2 months ago

Problem description:

While fine-tuning InternLM-XComposer, I ran into the error below: the expected scalar type is Float, but BFloat16 was found. Running the script directly with python (without DeepSpeed) raises no error.
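For context, "expected scalar type Float but found BFloat16" is what PyTorch raises when a matmul's input and weight dtypes disagree. Here is a minimal standalone repro of the same class of error (my own sketch, unrelated to the training code; the exact wording of the message varies by PyTorch version and device):

```python
import torch
import torch.nn.functional as F

x = torch.randn(2, 4, dtype=torch.bfloat16)  # activations in bf16, as under --bf16 training
w = torch.randn(8, 4, dtype=torch.float32)   # a weight that stayed in fp32

# Raises a RuntimeError about mismatched dtypes, e.g.
# "expected scalar type Float but found BFloat16".
out = F.linear(x, w)
```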

Error location and code snippet:

{'loss': 14.9467, 'grad_norm': 94.78041076660156, 'learning_rate': 7.692307692307693e-05, 'epoch': 0.0}                                                                                  
{'loss': 15.5564, 'grad_norm': 89.43864440917969, 'learning_rate': 0.00015384615384615385, 'epoch': 0.0}                                                                                 
  0%|▏                                                                                                                                               | 2/1250 [01:05<11:16:26, 32.52s/it]
Traceback (most recent call last):
  File "/new_disk/tanknee/CodeRepo/tl/visual-instruction/finetune_internlm.py", line 463, in <module>
    train()
  File "/new_disk/tanknee/CodeRepo/tl/visual-instruction/finetune_internlm.py", line 454, in train
    trainer.train()
  File "/new_disk/tanknee/CodeRepo/Packages/transformers/src/transformers/trainer.py", line 1624, in train
    return inner_training_loop(
  File "/new_disk/tanknee/CodeRepo/Packages/transformers/src/transformers/trainer.py", line 1961, in _inner_training_loop
    tr_loss_step = self.training_step(model, inputs)
  File "/new_disk/tanknee/CodeRepo/Packages/transformers/src/transformers/trainer.py", line 2902, in training_step
    loss = self.compute_loss(model, inputs)
  File "/new_disk/tanknee/CodeRepo/Packages/transformers/src/transformers/trainer.py", line 2925, in compute_loss
    outputs = model(**inputs)
  File "/new_disk/tanknee/anaconda3/envs/spatial/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/new_disk/tanknee/anaconda3/envs/spatial/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
    ret_val = func(*args, **kwargs)
  File "/new_disk/tanknee/anaconda3/envs/spatial/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 1852, in forward
    loss = self.module(*inputs, **kwargs)
  File "/new_disk/tanknee/anaconda3/envs/spatial/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/new_disk/tanknee/CodeRepo/Spatial-Model/SpatialModel/models/v2/intern/modeling_internlm_xcomposer2.py", line 439, in forward
    outputs = self.model(
  File "/new_disk/tanknee/anaconda3/envs/spatial/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/new_disk/tanknee/CodeRepo/Spatial-Model/SpatialModel/models/v2/intern/modeling_internlm2.py", line 921, in forward
    layer_outputs = torch.utils.checkpoint.checkpoint(
  File "/new_disk/tanknee/anaconda3/envs/spatial/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 249, in checkpoint
    return CheckpointFunction.apply(function, preserve, *args)
  File "/new_disk/tanknee/anaconda3/envs/spatial/lib/python3.10/site-packages/torch/autograd/function.py", line 506, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "/new_disk/tanknee/anaconda3/envs/spatial/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 107, in forward
    outputs = run_function(*args)
  File "/new_disk/tanknee/CodeRepo/Spatial-Model/SpatialModel/models/v2/intern/modeling_internlm2.py", line 916, in custom_forward
    return module(*inputs, output_attentions, None,
  File "/new_disk/tanknee/anaconda3/envs/spatial/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/new_disk/tanknee/CodeRepo/Spatial-Model/SpatialModel/models/v2/intern/modeling_internlm2.py", line 625, in forward
    hidden_states, self_attn_weights, present_key_value = self.attention(
  File "/new_disk/tanknee/anaconda3/envs/spatial/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/new_disk/tanknee/CodeRepo/Spatial-Model/SpatialModel/models/v2/intern/modeling_internlm2.py", line 391, in forward
    qkv_states = self.wqkv(hidden_states, im_mask)
  File "/new_disk/tanknee/anaconda3/envs/spatial/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/new_disk/tanknee/CodeRepo/Spatial-Model/SpatialModel/models/v2/intern/build_mlp.py", line 206, in forward
    res = super().forward(x)
  File "/new_disk/tanknee/anaconda3/envs/spatial/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: expected scalar type Float but found BFloat16
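The last frame is the custom Linear subclass in build_mlp.py (the wqkv projection that also takes im_mask), where super().forward(x) apparently hits an fp32 weight with a bf16 input. A common workaround for this class of mismatch, sketched below, is to cast the input to the weight's dtype before the base forward. This is a hypothetical patch, not a confirmed fix from the maintainers, and the class name is a stand-in:

```python
import torch
import torch.nn as nn

class PLoRALinear(nn.Linear):  # stand-in for the subclass in build_mlp.py; only forward shown
    def forward(self, x, im_mask=None):
        if x.dtype != self.weight.dtype:
            x = x.to(self.weight.dtype)  # align bf16 activations with an fp32 weight
        res = super().forward(x)
        # ... the image-token (im_mask) branch of the real layer would follow here ...
        return res
```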
TankNee commented 2 months ago

My bash script is as follows:

DISTRIBUTED_ARGS="
    --nproc_per_node $GPUS_PER_NODE \
    --nnodes $NNODES \
    --node_rank $NODE_RANK \
    --master_addr $MASTER_ADDR \
    --master_port $MASTER_PORT
"

torchrun $DISTRIBUTED_ARGS finetune_internlm.py \
    --model_name_or_path $MODEL \
    --data_path $DATA \
    --img_size 490 \
    --given_num True \
    --bf16 True \
    --fix_vit True \
    --fix_sampler False \
    --use_lora False \
    --output_dir output_qwen/interlm_s1-3 \
    --num_train_epochs 1 \
    --batch_size 1 \
    --per_device_train_batch_size 2 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 8 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 500 \
    --save_total_limit 1 \
    --learning_rate 1e-3 \
    --weight_decay 0.1 \
    --adam_beta2 0.95 \
    --warmup_ratio 0.01 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --report_to "none" \
    --max_length 4096 \
    --gradient_checkpointing True \
    --deepspeed scripts/intern_ds2.json
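Since the crash only shows up under DeepSpeed, one thing worth checking is whether scripts/intern_ds2.json enables bf16 consistently with --bf16 True; if the engine keeps module weights in fp32 while the Trainer feeds bf16 tensors, exactly this kind of mismatch can surface. The file isn't shown in the thread, so the following is only an assumed minimal ZeRO stage-2 config that defers to the Trainer flags via "auto", not the actual file:

```json
{
  "bf16": { "enabled": "auto" },
  "zero_optimization": { "stage": 2 },
  "train_micro_batch_size_per_gpu": "auto",
  "gradient_accumulation_steps": "auto",
  "gradient_clipping": "auto"
}
```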
TankNee commented 2 months ago

Also, not every sample triggers this; training can even run for a few steps before the error appears (the log below shows two steps completing first):

{'loss': 14.9467, 'grad_norm': 94.78041076660156, 'learning_rate': 7.692307692307693e-05, 'epoch': 0.0}                                                                                  
{'loss': 15.5564, 'grad_norm': 89.43864440917969, 'learning_rate': 0.00015384615384615385, 'epoch': 0.0}                                                                                 
  0%|▏                                                                                                                                               | 2/1250 [01:05<11:16:26, 32.52s/it]
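One hypothesis for the intermittency (speculation, not confirmed in this thread): the layer with the stray fp32 weight sits on a data-dependent path, e.g. code gated by im_mask, so batches that never exercise that path train fine. A generic way to locate whatever was left in fp32 is to audit parameter dtypes once the model is wrapped; audit_dtypes below is a hypothetical helper, not part of the repo:

```python
import torch

def audit_dtypes(model, expected=torch.bfloat16):
    """Print every parameter whose dtype differs from the expected training dtype."""
    for name, param in model.named_parameters():
        if param.dtype != expected:
            print(f"{name}: {param.dtype}")

# e.g. audit_dtypes(trainer.model) right before trainer.train()
```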