Closed: Lut-hub closed this issue 3 months ago.
Thank you very much for your interest in EasyEdit. We apologize for our limited availability, as we are currently busy with the NeurIPS submission deadline. We will focus on optimization after the deadline has passed.
Thank you for your suggestion. I will modify the code throughout to use `tok.encode(xx, add_special_tokens=False)` to avoid adding unnecessary tokens.
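A minimal sketch of why `add_special_tokens=False` matters here. The stub tokenizer below is hypothetical (real token ids differ); it only mimics the relevant LlamaTokenizer behavior of prepending a BOS token `"<s>"` (id 1) by default:

```python
class StubLlamaTokenizer:
    """Toy stand-in for LlamaTokenizer; ids are illustrative only."""
    bos_token_id = 1
    vocab = {"German": 5147}

    def encode(self, text, add_special_tokens=True):
        ids = [self.vocab[t] for t in text.split()]
        if add_special_tokens:
            # LlamaTokenizer prepends "<s>" unless told otherwise
            ids = [self.bos_token_id] + ids
        return ids

tok = StubLlamaTokenizer()
print(tok.encode("German"))                            # [1, 5147] — stray "<s>"
print(tok.encode("German", add_special_tokens=False))  # [5147]   — clean target
```

With `add_special_tokens=False`, the target span is encoded without the stray BOS token that would otherwise leak into the optimization target.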
Thanks for your reply 😊
Hello!
I came across a small issue while editing Llama 2 with the PMET method, and I thought it might be worth mentioning. At line 44 of the file
`easyeditor/models/pmet/compute_zs.py`, the LlamaTokenizer adds an extra "<s>" token to
`request["target_new"]`. This results in an extra "<s>" token being appended to the end of the query during subsequent processing, making it difficult for PMET to optimize `zs` effectively. For example, when we edit the object of "What is the native language of Christiane Cohendy?" to "German", the result is:
However, for MEMIT, it should be:
For the MEMIT method, there is the following code at line 43 of the file
`easyeditor/models/memit/compute_z.py`:

So it would be better to add the aforementioned code at line 47 of the file
`easyeditor/models/pmet/compute_zs.py`. 😊