OpenGVLab / InternVL

[CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o. An open-source multimodal dialogue model that approaches GPT-4o's performance.
https://internvl.readthedocs.io/en/latest/
MIT License

[Bug] Weight key issue when using LoRA fine-tuning (already fixed) #593

Lillianwei-h opened this issue 1 month ago

Lillianwei-h commented 1 month ago


Describe the bug

Issue

Due to the use of PEFT, the key names of the saved weights after LoRA training are inconsistent with the original ones: keys under the language_model prefix become language_model.base_model.model.
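For context, here is a toy reproduction of the renaming using a stand-in nn.Module rather than the real InternVL model (module names here are illustrative only):

```python
import torch.nn as nn
from peft import LoraConfig, get_peft_model

# Stand-in for the wrapped language model; a single Linear named
# "model" is enough to show how PEFT renames the state_dict keys.
class TinyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.model = nn.Linear(8, 8)

lm = TinyLM()
print(list(lm.state_dict().keys()))
# ['model.weight', 'model.bias']

lora_lm = get_peft_model(lm, LoraConfig(target_modules=["model"]))
print(list(lora_lm.state_dict().keys()))
# Every key now carries an extra 'base_model.model.' prefix
# (e.g. 'base_model.model.model.base_layer.weight'), plus the
# lora_A/lora_B adapter weights, so the saved checkpoint no longer
# matches the original key layout.
```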

Fix

Before saving the weights at the end of training, I call model.language_model = model.language_model.merge_and_unload(), and everything looks fine. I hope you can add this in a future update.
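Continuing the toy example above, a minimal sketch of where the merge would go (variable names follow the snippet above; the actual fix is the one-liner on model.language_model, not the repository's exact training code):

```python
from peft import PeftModel

# Merging before the save restores the original key names.
assert isinstance(lora_lm, PeftModel)
merged_lm = lora_lm.merge_and_unload()  # folds the LoRA deltas into the base weights

print(list(merged_lm.state_dict().keys()))
# ['model.weight', 'model.bias']  -- back to the unwrapped layout
```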

Reproduction

Already fixed

Environment

Already fixed

Error traceback

No response

qishisuren123 commented 1 month ago

We appreciate you bringing this issue to our attention. We will conduct a thorough investigation and provide an update as soon as possible. Should we identify a bug, we will implement the necessary code changes. Thank you for your continued support.