OpenBMB / MiniCPM-V

MiniCPM-Llama3-V 2.5: A GPT-4V Level Multimodal LLM on Your Phone
Apache License 2.0

Is patch_attn_mask in the finetune code written incorrectly? After running, it is all True #322

Closed: shidingz closed this 11 hours ago

shidingz commented 1 week ago

Is there an existing issue / discussion for this?

Is there an existing answer for this in FAQ?

Current Behavior

https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5/blob/45387f99a455e11801b78a0b24811856688e0c8b/modeling_minicpmv.py#L97

At line 95 you can see:

```python
patch_attn_mask = torch.zeros((B, 1, max_patches), dtype=torch.bool, device=device)
```

patch_attn_mask has shape (B, 1, max_patches), but when marking the non-padded positions the assignment slices along dim=1 (the singleton dimension) instead of the last dimension, so after running, patch_attn_mask is all True:

```python
for i in range(B):
    patch_attn_mask[i, :tgt_sizes[i][0] * tgt_sizes[i][1]] = True
```

The correct version should be:

```python
for i in range(B):
    patch_attn_mask[i, 0, :tgt_sizes[i][0] * tgt_sizes[i][1]] = True
```

The middle dimension was overlooked.
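A minimal standalone repro of the two indexing variants (a sketch only; `B`, `max_patches`, and the `tgt_sizes` values below are made up for illustration, not taken from the model code):

```python
import torch

B, max_patches = 2, 6
tgt_sizes = torch.tensor([[1, 3], [2, 2]])  # hypothetical (h, w) patch grids

# Buggy indexing: patch_attn_mask[i] has shape (1, max_patches), so the
# slice :n runs over the singleton dim=1, and the True assignment then
# broadcasts over the entire last dimension.
buggy = torch.zeros((B, 1, max_patches), dtype=torch.bool)
for i in range(B):
    buggy[i, :tgt_sizes[i][0] * tgt_sizes[i][1]] = True
print(buggy.sum().item())  # 12 == B * max_patches, i.e. everything is True

# Proposed fix: index the singleton dim explicitly, slice only the last dim.
fixed = torch.zeros((B, 1, max_patches), dtype=torch.bool)
for i in range(B):
    fixed[i, 0, :tgt_sizes[i][0] * tgt_sizes[i][1]] = True
print(fixed.sum().item())  # 7 == 1*3 + 2*2, only the valid patches are True
```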

Expected Behavior

No response

Steps To Reproduce

No response

Environment

- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`):

Anything else?

No response

eternalding commented 1 week ago

I think this is the same issue as the one reported in #274.