Below is a code snippet taken from https://github.com/lvwerra/trl/blob/fc468e0f3582de1aacd071fceb24265c619a8ef5/examples/stack_llama/scripts/merge_peft_adapter.py:
# Load the Lora model
model = PeftModel.from_pretrained(model, script_args.adapter_model_name)
model.eval()

key_list = [key for key, _ in model.base_model.model.named_modules() if "lora" not in key]
for key in key_list:
    parent, target, target_name = model.base_model._get_submodules(key)
    if isinstance(target, peft.tuners.lora.Linear):
        # Replace each LoRA-wrapped Linear with a plain nn.Linear of the same shape
        bias = target.bias is not None
        new_module = torch.nn.Linear(target.in_features, target.out_features, bias=bias)
        model.base_model._replace_module(parent, target_name, new_module, target)

# Unwrap the Peft/Lora wrappers so only the underlying base model remains
model = model.base_model.model
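For reference, merging a LoRA into a Linear layer amounts to folding the low-rank update into that layer's existing weight matrix. The sketch below is illustrative only (the names W, lora_A, lora_B, scaling and the shapes are placeholders, not variables from the script); it just shows that the update B @ A has the same shape as the original weight, so adding it does not change the layer's shape.

import torch

# Illustrative sketch (not from the script): what merging one LoRA-adapted
# Linear layer computes. Names and shapes are placeholders.
out_features, in_features, r = 16, 32, 4

W = torch.randn(out_features, in_features)   # original weight of the Linear layer
lora_A = torch.randn(r, in_features)         # LoRA "A" matrix (r x in_features)
lora_B = torch.randn(out_features, r)        # LoRA "B" matrix (out_features x r)
scaling = 1.0                                # alpha / r in standard LoRA

# The merged weight is the original weight plus the low-rank update.
W_merged = W + scaling * (lora_B @ lora_A)

# The update has the same shape as W, so the merged layer has exactly as
# many parameters as the original layer.
assert W_merged.shape == W.shape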
When a LoRA is merged with the original model, do the LoRA weights remain separate parameters, or do they become part of the parameters of layers that already exist in the original model? In other words, is the parameter count of the merged model the sum of the two, or exactly equal to the original model's parameter count? If it is the sum, does that mean you can keep stacking LoRAs on top of the original model by merging repeatedly? And if it equals the original parameter count, then after merging one LoRA, merging a second LoRA would severely damage the first LoRA's parameters and hurt the performance on the task it was tuned for, wouldn't it?
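To make the "stacking" scenario in the question concrete, the following sketch shows only the mechanics (all model and adapter paths are hypothetical placeholders, not from the script): a merged checkpoint has the same architecture as the original model, so a further adapter can be attached to it in the same way as to any base model.

# Hypothetical workflow sketch: after merging LoRA #1 and saving the result,
# a second LoRA adapter can be attached to the merged checkpoint.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("path/to/model-merged-with-lora1")  # placeholder path
model = PeftModel.from_pretrained(base, "path/to/lora2-adapter")                # placeholder path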