That warning can generally be ignored. The large gap between the model's answers and your labels is because you did not load the LoRA model correctly.
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
import torch
import os

path_to_adapter = "/root/ld/ld_project/MiniCPM-V/finetune/output/output_minicpmv2_lora/checkpoint-10"
merge_path = "/root/ld/ld_project/MiniCPM-V/finetune/output/output_minicpmv2_lora/merge"

if not os.path.exists(merge_path):
    os.makedirs(merge_path)

# Load the base model with the LoRA adapter applied
model = AutoPeftModelForCausalLM.from_pretrained(
    path_to_adapter,
    device_map="auto",
    trust_remote_code=True
).eval()

# Also restore the ViT/resampler and embedding weights saved with the adapter;
# after this the model is usable: the ViT resampler and the LLM's LoRA are loaded
vpm_resampler_embedtokens_weight = torch.load(f"{path_to_adapter}/vpm_resampler_embedtokens.pt")
model.load_state_dict(vpm_resampler_embedtokens_weight, strict=False)
```
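Since the snippet above creates `merge_path` but never writes to it, a minimal sketch of the follow-up steps might look like this: fold the adapter into the base weights with peft's standard `merge_and_unload`, save the result to `merge_path`, then chat with the merged model. The `chat` call follows the signature shown on the MiniCPM-V 2 model card; the tokenizer location and the test image path are assumptions here, not part of the original answer.

```python
from PIL import Image
from transformers import AutoTokenizer

# Fold the LoRA weights into the base model and save it (standard peft API)
merged_model = model.merge_and_unload()
merged_model.save_pretrained(merge_path, safe_serialization=True)

# Assumption: the tokenizer was saved alongside the adapter checkpoint
tokenizer = AutoTokenizer.from_pretrained(path_to_adapter, trust_remote_code=True)
tokenizer.save_pretrained(merge_path)

# Assumption: chat interface as documented on the MiniCPM-V 2 model card
image = Image.open("test.jpg").convert("RGB")  # hypothetical test image
msgs = [{"role": "user", "content": "Describe the image."}]
res, context, _ = merged_model.chat(
    image=image,
    msgs=msgs,
    context=None,
    tokenizer=tokenizer,
    sampling=True,
    temperature=0.7
)
print(res)
```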
Is there an existing issue / discussion for this?
Is there an existing answer for this in FAQ?
Current Behavior

Hello, during fine-tuning, the first few times eval ran after validation finished I got the warning `UserWarning: Could not find a config file in MiniCPM-V-2 - will assume that the vocabulary was not modified.` I looked at the answers to this question in the Hugging Face issues, which said it can be ignored and that the model saves and validates normally; it does indeed save normally on my side. However, during validation the model's answers differ considerably from the labels I set. Can this warning be ignored? ![QQ20240604-101616](https://github.com/OpenBMB/MiniCPM-V/assets/43409147/56c02998-4dbe-4bf0-849b-92b23ac9af20)

Here is my inference code; is it correct? ![QQ20240604-102244](https://github.com/OpenBMB/MiniCPM-V/assets/43409147/7f730a39-514d-434e-8dfa-962ac10423bc)
Expected Behavior
No response
Steps To Reproduce
No response
Environment

Anything else?
No response