zjunlp / EasyEdit

[ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs.
https://zjunlp.github.io/project/KnowEdit
MIT License

R-ROME has poor performance when using GPT2-xl #328

Closed zhilu1 closed 2 months ago

zhilu1 commented 2 months ago

I am working on the wiki_counterfact dataset provided by KnowEdit and noticed that R-ROME's Edit Success rate is only about 0.67 when using the gpt2-xl model, which is significantly lower than that of the original ROME. The locality and portability of R-ROME are also much lower. However, when employing the llama-7b model, R-ROME outperforms ROME and its Edit Success rate reaches 0.98.
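For reference, an "Edit Success" figure like the 0.67 vs. 0.98 above is usually the mean per-example token accuracy on the edited target. A minimal sketch of that aggregation (function names and data layout here are hypothetical, not EasyEdit's actual API):

```python
# Hedged sketch: aggregate per-edit token accuracy into an Edit Success rate.
# `records` is a list of (predicted_token_ids, target_token_ids) pairs,
# one per edited fact. Names are illustrative, not EasyEdit internals.

def token_accuracy(predicted_ids, target_ids):
    """Fraction of target token positions predicted correctly."""
    if not target_ids:
        return 0.0
    hits = sum(p == t for p, t in zip(predicted_ids, target_ids))
    return hits / len(target_ids)

def edit_success_rate(records):
    """Mean per-example token accuracy over all edited facts."""
    return sum(token_accuracy(pred, tgt) for pred, tgt in records) / len(records)

# Toy usage: one fully successful edit and one 1/3-correct edit
records = [([5, 9], [5, 9]), ([1, 2, 4], [1, 7, 8])]
print(round(edit_success_rate(records), 2))  # ≈ 0.67
```

Locality and portability are typically computed the same way, just over neighborhood and rephrased/reasoning prompts instead of the edit prompt itself.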

The hyperparameters I used for R-ROME with the gpt2-xl model are an exact replica of those used by ROME with the same model. So I wonder if there is anything else I need to adjust to make R-ROME work with gpt2-xl?
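For concreteness, a config along these lines is what "an exact replica of ROME's gpt2-xl hyperparameters" would look like in EasyEdit's YAML hparams format. The values below are illustrative, recalled from ROME's published GPT-2 XL defaults; the authoritative file is the one shipped in the repo's `hparams/` directory:

```yaml
# Illustrative R-ROME hparams for gpt2-xl, mirroring ROME's defaults.
# Verify against the repo's own hparams file before use.
alg_name: "R-ROME"
model_name: "gpt2-xl"
layers: [17]                 # single edited MLP layer, as in ROME
fact_token: "subject_last"
v_num_grad_steps: 20
v_lr: 5e-1
v_loss_layer: 47
v_weight_decay: 0.5
clamp_norm_factor: 4
kl_factor: 0.0625
mom2_adjustment: true
rewrite_module_tmp: "transformer.h.{}.mlp.c_proj"
layer_module_tmp: "transformer.h.{}"
mlp_module_tmp: "transformer.h.{}.mlp"
attn_module_tmp: "transformer.h.{}.attn"
```

Since R-ROME mainly changes how the key vector is computed rather than the optimization itself, reusing ROME's values is a reasonable starting point, which makes the gpt2-xl gap you observe more surprising.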

XeeKee commented 2 months ago

Hello, thank you very much for your feedback. We have also identified this issue, and in my experiments, I found that GPT-2 XL also performs poorly on MEMIT. Adjusting the hyperparameters does not improve the results. I look forward to your feedback or further discussion.

zxlzr commented 2 months ago

Hi, maybe you can try MEMIT with other LLMs such as LLaMA or Qwen. Do you have any further questions?