I created a LoRA adapter and tried to merge it with the base model, but somehow the new model and the original model are giving the same logits.

base_model is as follows:

and the lora_model is created by the following code:

and is as follows:

Even though I can clearly see the LoRA modules being injected into the base_model, the logits still remain the same. I double-checked this claim by comparing the parameters of the two models with the following code:

flag = True
for p1, p2 in zip(base_model.parameters(), antiexpert_peft_model.parameters()):
    if p1.data.ne(p2.data).sum() > 0:  # any element that differs flips the flag
        flag = False
print(flag)

which prints True, i.e. every parameter pair is identical. I'm confused as to what's wrong in my implementation, or whether something went wrong during training.
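For reference, the logic of the check above can be sketched without torch: a single differing element in any parameter pair should make the comparison fail. This is a minimal stand-alone sketch, assuming plain Python lists in place of parameter tensors (the function name and toy values are hypothetical, for illustration only):

def params_identical(params_a, params_b):
    """True only if every corresponding parameter matches exactly.

    Assumes both models have the same architecture, so the parameter
    sequences line up one-to-one (zip would silently truncate otherwise).
    """
    return all(a == b for a, b in zip(params_a, params_b))

# Toy stand-ins for two models' parameter tensors.
base        = [[0.1, 0.2], [0.3]]
merged_same = [[0.1, 0.2], [0.3]]
merged_diff = [[0.1, 0.2], [0.4]]

print(params_identical(base, merged_same))  # True: indistinguishable, as in the question
print(params_identical(base, merged_diff))  # False: at least one weight changed

With actual torch tensors, `torch.equal(p1, p2)` per pair is an equivalent exact-match check and reads more directly than summing `ne`.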