Thank you very much for your attention to EasyEdit. We will fix this bug in the near future.
There are two editing scenarios:

- Single editing (`sequential_edit=False`): In this case, the parameters are rolled back after each edit, which means the adapter weights are discarded, as you mentioned. When a new example is edited, the LoRA module is added again. As a result, the returned `edited_model` is identical to the original model; only single-edit performance is evaluated, which is why the outputs do not change before and after editing, as you observed.
- Continuous editing (`sequential_edit=True`): In this case, the parameters are not rolled back after each edit, and the adapter weights are retained throughout. If you want to test generation, use continuous editing so that the returned model includes the LoRA weights (see the sketch below).
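As a minimal sketch of the second scenario, the call might look like the following. This assumes the standard `BaseEditor` / `LoRAHyperParams` API; the hparams path and the prompt/target strings are placeholders, not values from this issue.

```python
# Minimal sketch: LoRA editing with sequential_edit=True so the returned model
# keeps the adapter weights. Paths and example data below are placeholders.
from easyeditor import BaseEditor, LoRAHyperParams

hparams = LoRAHyperParams.from_hparams('./hparams/LoRA/llama-7b.yaml')
editor = BaseEditor.from_hparams(hparams)

metrics, edited_model, _ = editor.edit(
    prompts=['Who is the president of the US?'],
    target_new=['Joe Biden'],
    sequential_edit=True,  # no rollback: adapter weights are kept across edits
)

# edited_model now carries the LoRA adapters, so generation reflects the edits.
```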
There doesn't seem to be any bug in the code. I hope this answers your question.
Thanks a lot. I have found how this parameter works.
LoRA editing does not work. The pre-edit outputs: ![Snipaste_2024-06-26_20-18-58](https://github.com/zjunlp/EasyEdit/assets/55312666/bbaeba51-38af-49af-ae4d-c331ef969cba)
and the post-edit outputs: ![Snipaste_2024-06-26_20-15-01](https://github.com/zjunlp/EasyEdit/assets/55312666/fcc24bc1-d606-43c3-b376-47fc6147882f)
https://github.com/zjunlp/EasyEdit/commit/d62ae568c87144384e5d94b4f8a75ad26a1083fa#diff-35c40720142e2be3428100729e86956ac7c6725491150b62159171bf830946f9
In this refactor, at lines 333 and 338 of editor.py, you deleted the `keep_original_weight` parameter, which causes a LoRA-based model edit to discard the adapter weights and keep only the base model. Is this a bug, or am I using LoRA the wrong way?