InternLM / xtuner

An efficient, flexible and full-featured toolkit for fine-tuning LLMs (InternLM2, Llama3, Phi3, Qwen, Mistral, ...)
https://xtuner.readthedocs.io/zh-cn/latest/
Apache License 2.0

LLaVA pretraining fails with RuntimeError: The size of tensor a (0) must match the size of tensor b (592) at non-singleton dimension 2 #834

Open rfvscj opened 1 month ago

rfvscj commented 1 month ago

How to reproduce

xtuner train llava_internlm2_chat_20b_clip_vit_large_p14_336_e1_gpu8_pretrain.py

Config file

Only the dataset and model paths were changed.

Run log

Map (num_proc=32): 100%|██████████████████████████████████████████████████████████████████████████████████| 15000/15000 [00:01<00:00, 14362.05 examples/s]
Map (num_proc=32): 100%|██████████████████████████████████████████████████████████████████████████████████| 15000/15000 [00:01<00:00, 11732.15 examples/s]
Filter (num_proc=32): 100%|███████████████████████████████████████████████████████████████████████████████| 15000/15000 [00:00<00:00, 17789.18 examples/s]
Map (num_proc=32): 100%|████████████████████████████████████████████████████████████████████████████████████| 15000/15000 [00:15<00:00, 951.24 examples/s]
Filter (num_proc=32): 100%|███████████████████████████████████████████████████████████████████████████████| 15000/15000 [00:01<00:00, 13716.77 examples/s]
Map (num_proc=32): 100%|████████████████████████████████████████████████████████████████████████████████████| 15000/15000 [00:15<00:00, 986.93 examples/s]
07/13 10:44:00 - mmengine - WARNING - Dataset LLaVADataset has no metainfo. ``dataset_meta`` in visualizer will be None.
07/13 10:44:00 - mmengine - INFO - Num train samples 15000
07/13 10:44:00 - mmengine - INFO - train example:
07/13 10:44:00 - mmengine - INFO - <s><|im_start|>user
<image>
Render a clear and concise summary of the photo.<|im_end|>
<|im_start|>assistant
select luxury furniture 3 - inch gel memory foam mattress topper<|im_end|>

07/13 10:44:00 - mmengine - INFO - before_train in EvaluateChatHook.
You are using an old version of the checkpointing format that is deprecated (We will also silently ignore `gradient_checkpointing_kwargs` in case you passed it).Please update to the new format on your modeling file. To use the new format, you need to completely remove the definition of the method `_set_gradient_checkpointing` in your model.
torch.Size([1, 0, 8, 8, 128])
torch.Size([1, 0, 48, 128])
torch.Size([1, 0, 48, 128])
torch.Size([1, 48, 0, 128])
torch.Size([1, 48, 0, 128])
torch.Size([1, 1, 592, 128])
torch.Size([1, 1, 592, 128])
Traceback (most recent call last):
  File "/data/sx/xtuner/xtuner/tools/train.py", line 360, in <module>
    main()
  File "/data/sx/xtuner/xtuner/tools/train.py", line 356, in main
    runner.train()
  File "/usr/local/lib/python3.10/site-packages/mmengine/runner/runner.py", line 1777, in train
    model = self.train_loop.run()  # type: ignore
  File "/usr/local/lib/python3.10/site-packages/mmengine/runner/loops.py", line 271, in run
    self.runner.call_hook('before_train')
  File "/usr/local/lib/python3.10/site-packages/mmengine/runner/runner.py", line 1839, in call_hook
    getattr(hook, fn_name)(self, **kwargs)
  File "/data/sx/xtuner/xtuner/engine/hooks/evaluate_chat_hook.py", line 234, in before_train
    self._generate_samples(runner, max_new_tokens=50)
  File "/data/sx/xtuner/xtuner/engine/hooks/evaluate_chat_hook.py", line 220, in _generate_samples
    self._eval_images(runner, model, device, max_new_tokens,
  File "/data/sx/xtuner/xtuner/engine/hooks/evaluate_chat_hook.py", line 152, in _eval_images
    generation_output = model.generate(
  File "/usr/local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/generation/utils.py", line 1914, in generate
    result = self._sample(
  File "/usr/local/lib/python3.10/site-packages/transformers/generation/utils.py", line 2651, in _sample
    outputs = self(
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/modeling_internlm2.py", line 1204, in forward
    outputs = self.model(
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/modeling_internlm2.py", line 1004, in forward
    layer_outputs = decoder_layer(
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/modeling_internlm2.py", line 738, in forward
    hidden_states, self_attn_weights, present_key_value = self.attention(
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data/sx/xtuner/xtuner/model/modules/dispatch/internlm2.py", line 107, in internlm2_attn_forward
    query_states, key_states = apply_rotary_pos_emb(query_states, key_states,
  File "/data/sx/xtuner/xtuner/model/modules/dispatch/internlm2.py", line 29, in apply_rotary_pos_emb
    q_embed = (q * cos) + (rotate_half(q) * sin)
RuntimeError: The size of tensor a (0) must match the size of tensor b (592) at non-singleton dimension 2
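For context, the failing broadcast in `apply_rotary_pos_emb` can be reproduced in isolation. This is a minimal sketch using the shapes from the debug prints above (a query tensor with a zero-length sequence dimension multiplied against cos/sin tables built for 592 positions); it is not xtuner's actual code path.

```python
import torch

# Shapes taken from the debug prints in the log above:
# q arrives with seq_len = 0, while cos was computed for 592 positions.
q = torch.zeros(1, 48, 0, 128)    # (batch, heads, seq_len=0, head_dim)
cos = torch.ones(1, 1, 592, 128)  # (batch, 1, seq_len=592, head_dim)

try:
    _ = q * cos  # broadcasting cannot reconcile 0 with 592 at dim 2
except RuntimeError as e:
    print(e)
```

A size-0 dimension only broadcasts against size 1, so the multiply raises exactly the error in the traceback; the real bug is upstream, where the query states end up empty.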
rfvscj commented 1 month ago

Solution

For this specific error, it turned out to be a transformers version issue: 4.42.x fails, while 4.39.x works fine.
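A sketch of the workaround in a pip-managed environment (the exact 4.39 patch release shown here is an assumption; any 4.39.x should match the report above):

```shell
# Downgrade transformers to a 4.39.x release (4.42.x triggers the rotary error)
pip install "transformers==4.39.3"
python -c "import transformers; print(transformers.__version__)"
```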

New issue

These shapes confirm the first failure was a version issue (the sequence dimensions now match): torch.Size([1, 1, 592, 128]) torch.Size([1, 48, 592, 128])
Traceback (most recent call last):
  File "/root/xtuner/xtuner/tools/train.py", line 360, in <module>
    main()
  File "/root/xtuner/xtuner/tools/train.py", line 356, in main
    runner.train()
  File "/usr/local/lib/python3.10/site-packages/mmengine/runner/runner.py", line 1777, in train
    model = self.train_loop.run()  # type: ignore
  File "/usr/local/lib/python3.10/site-packages/mmengine/runner/loops.py", line 271, in run
    self.runner.call_hook('before_train')
  File "/usr/local/lib/python3.10/site-packages/mmengine/runner/runner.py", line 1839, in call_hook
    getattr(hook, fn_name)(self, **kwargs)
  File "/root/xtuner/xtuner/engine/hooks/evaluate_chat_hook.py", line 234, in before_train
    self._generate_samples(runner, max_new_tokens=50)
  File "/root/xtuner/xtuner/engine/hooks/evaluate_chat_hook.py", line 220, in _generate_samples
    self._eval_images(runner, model, device, max_new_tokens,
  File "/root/xtuner/xtuner/engine/hooks/evaluate_chat_hook.py", line 152, in _eval_images
    generation_output = model.generate(
  File "/usr/local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/generation/utils.py", line 1575, in generate
    result = self._sample(
  File "/usr/local/lib/python3.10/site-packages/transformers/generation/utils.py", line 2697, in _sample
    outputs = self(
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/accelerate/hooks.py", line 166, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/internlm2-chat-20b/modeling_internlm2.py", line 1226, in forward
    logits = self.output(hidden_states)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/accelerate/hooks.py", line 166, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: expected mat1 and mat2 to have the same dtype, but got: float != c10::BFloat16
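This second error is a dtype mismatch at the final `F.linear` projection: the lm-head weight is bfloat16 while the hidden states arrive as float32. A minimal sketch of the mismatch and the usual remedy (casting the input to the layer's dtype); the layer sizes are illustrative, not InternLM2's actual dimensions:

```python
import torch

# bfloat16 output head, float32 hidden states -> F.linear raises a dtype error.
head = torch.nn.Linear(128, 256).to(torch.bfloat16)
x = torch.randn(1, 128)  # float32 hidden states

try:
    head(x)
except RuntimeError as e:
    print(e)  # expected mat1 and mat2 to have the same dtype

out = head(x.to(head.weight.dtype))  # cast before the projection
assert out.dtype == torch.bfloat16
```

In practice the fix belongs wherever the dtypes diverge (loading the model and inputs with one consistent `torch_dtype`), rather than a cast at the call site.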
hhaAndroid commented 1 month ago

Is your internlm2 checkpoint the latest version? There is currently an incompatibility, because internlm2 was updated once. I suggest pulling the latest LLM weights and switching xtuner to the latest version as well; that should fix it.

yinglang commented 1 month ago

Is there any fix for transformers >= 4.42.4?