zoharli opened 5 hours ago

Thanks for the great work! Will the code of the unlearning part, i.e., the five unlearning algorithms tested in the paper, be open-sourced?

Thank you for your comments! We plan to release it later, as we are still cleaning up the code for the baselines. However, since we have already provided the fine-tuning scripts, it is actually quite simple to implement the five unlearning algorithms yourself! For example, you can make it GA by just flipping the sign when calculating the loss, and you may find NPO code here. For Gradient Difference, you apply GA on the forget dataset and ordinary gradient descent on the retain dataset. Lastly, for KL Min, you only need one vanilla (reference) model and use `F.kl_div` to compute the KL divergence loss between the outputs of the two models.
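For anyone who wants to try this before the official release, here is a minimal PyTorch sketch of the three losses described above (GA, Gradient Difference, and KL Min). It assumes HuggingFace-style causal LMs that return `.loss`/`.logits` when called with labels; the names `forget_batch`, `retain_batch`, and `ref_model` are illustrative placeholders, not the repository's actual API, so adapt them to the provided fine-tuning scripts.

```python
import torch
import torch.nn.functional as F

def gradient_ascent_loss(model, forget_batch):
    # GA: same forward pass as fine-tuning, but with the sign of the loss flipped
    return -model(**forget_batch).loss

def gradient_difference_loss(model, forget_batch, retain_batch):
    # Gradient Difference: ascend on the forget set, descend on the retain set
    forget_loss = -model(**forget_batch).loss
    retain_loss = model(**retain_batch).loss
    return forget_loss + retain_loss

def kl_min_loss(model, ref_model, forget_batch, retain_batch):
    # KL Min: GA on the forget set, plus a KL term that keeps the retain-set
    # predictions close to those of a frozen vanilla (reference) model.
    forget_loss = -model(**forget_batch).loss

    logits = model(**retain_batch).logits
    with torch.no_grad():  # the reference model is not updated
        ref_logits = ref_model(**retain_batch).logits

    # F.kl_div expects log-probabilities as input and probabilities as target
    kl = F.kl_div(
        F.log_softmax(logits, dim=-1),
        F.softmax(ref_logits, dim=-1),
        reduction="batchmean",
    )
    return forget_loss + kl
```

The exact weighting of the two terms, and whether GA is included in the KL Min objective at all, may differ from the paper's implementation, so treat the combination above as an assumption to be checked against the released code.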