Hi, thanks for your work. I agree with the value of LLM unlearning for privacy preservation, since I also work on unlearning in federated learning scenarios. However, I just tested a simple case using the original facebook/opt-1.3b model as follows:
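(The exact snippet I ran isn't reproduced above; the following is a minimal sketch of the kind of test I mean, assuming the standard Hugging Face `transformers` API. The prompt, generation settings, and helper names are my own placeholders, not the paper's code.)

```python
def generate(model_name: str, prompt: str, max_new_tokens: int = 50) -> str:
    """Greedy-decode a completion from a causal LM and return only the new text."""
    # Lazy import so the whitespace check below works without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    # Decode only the newly generated tokens, skipping the echoed prompt.
    new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)


def is_whitespace_only(text: str) -> bool:
    # True when the completion contains no visible characters at all.
    return text.strip() == ""


# Usage (downloads ~2.6 GB of weights on first run):
# completion = generate("facebook/opt-1.3b", "Who is Harry Potter?")
# print(repr(completion), is_whitespace_only(completion))
```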
The output is already whitespace even with the original, un-unlearned model, which makes me doubt the true effect of the proposed method.
I hope to hear a further explanation from you. Thanks!