OpenBMB / MiniCPM-V

MiniCPM-Llama3-V 2.5: A GPT-4V Level Multimodal LLM on Your Phone
Apache License 2.0

[BUG] What is the total batch size for full-parameter fine-tuning? #249

Closed aoji0606 closed 2 weeks ago

aoji0606 commented 3 weeks ago

Is there an existing issue / discussion for this?

Is there an existing answer for this in the FAQ?

Current Behavior

No response

Expected Behavior

No response

Steps To Reproduce

No response

Environment

- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`):

Anything else?

No response

aoji0606 commented 3 weeks ago

I fine-tuned on my own data with finetune_ds.sh, setting a total batch size of 8*16 and lr = 1e-6. The final training loss was 0.8713, but the MME results after training are very poor. What could be the reason?
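For reference, with HuggingFace Trainer-style arguments such as those passed by finetune_ds.sh, the effective (total) batch size is the per-device batch size times the gradient accumulation steps times the number of processes. The sketch below only illustrates that arithmetic for the reported 8*16; which factor maps to which argument, and the GPU count, are assumptions here, not values taken from the script.

```python
# Minimal sketch of the effective batch size calculation (not code from the repo).
# Assumption: "8*16" means per-device batch size 8 with 16 gradient accumulation steps.
per_device_train_batch_size = 8    # assumed batch per GPU
gradient_accumulation_steps = 16   # assumed accumulation steps
world_size = 1                     # assumed number of GPUs / processes

total_batch_size = (per_device_train_batch_size
                    * gradient_accumulation_steps
                    * world_size)
print(total_batch_size)  # 128 under these assumptions; scales with world_size
```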

qyc-98 commented 3 weeks ago

How different is your dataset from MME? Have you evaluated on any other in-domain datasets? On my side, training on RefCOCO with a single GPU and a batch size of 16 at least shows a clear improvement.