AKANKSHASINGH233 closed this issue 1 month ago
Hi, it's hard to figure out what causes the issue with the given information.
You say that you are getting 62% instead of 80% on CIFAR-20 class unlearning with ViT (I assume you mean retain-set accuracy). Our paper reports accuracies in the 80s range for ResNet on CIFAR 20-class unlearning, not ViT, which is in the 90s (see the appendix of the arXiv paper).
You might be mixing up different scenarios or your ViT model has not reached the same accuracy level as the one we pretrained (just speculation at this point).
I'd suggest first checking that your ViT accuracy matches or exceeds the baseline we report, and then running a hyperparameter search for SSD in case your ViT training differs for any reason (alpha can be quite sensitive, but it usually is not a problem).
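If it helps, a hyperparameter search over SSD's two parameters can be as simple as a grid sweep. This is only a minimal sketch: `run_unlearning` is a hypothetical callable standing in for your actual SSD unlearning run, and it should return a scalar score you want to maximize (e.g., retain-set accuracy, possibly penalized by forget-set deviation from the retrained baseline).

```python
from itertools import product

def grid_search(run_unlearning, alphas, lambdas):
    """Sweep every (alpha, lambda) pair and keep the best-scoring one.

    `run_unlearning` is a user-supplied callable (hypothetical here) that
    performs one SSD unlearning run and returns a scalar score.
    """
    best_params, best_score = None, float("-inf")
    for alpha, lam in product(alphas, lambdas):
        score = run_unlearning(alpha=alpha, lam=lam)
        if score > best_score:
            best_params, best_score = (alpha, lam), score
    return best_params, best_score

# Toy stand-in for a real unlearning run (illustration only):
# score peaks at alpha=10, lam=1.
def toy_run(alpha, lam):
    return -(alpha - 10) ** 2 - (lam - 1) ** 2

params, score = grid_search(toy_run, alphas=[1, 10, 50], lambdas=[0.5, 1, 5])
# params -> (10, 1)
```

In practice you would replace `toy_run` with a function that loads your ViT checkpoint, applies SSD with the given parameters, and evaluates it.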
You can also check the code of others who have reproduced/shown the performance of SSD on various unlearning tasks:
https://arxiv.org/pdf/2312.02052 (DUCK)
https://arxiv.org/abs/2402.14015 (Corrective unlearning)
https://arxiv.org/abs/2403.03218 (LLM unlearning)
Edit: the poster deleted the question this answers.
Hi, without knowing which class you are talking about I can't compare anything. Furthermore, forget-set accuracy does not have to be 0 to be ideal. As we describe in our paper, and as many others do too (e.g., Chundawat, Tarun), over-forgetting by pushing Df accuracy below what retraining would yield is not ideal (Streisand effect).
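To make the point concrete: the target for forget-set accuracy is the retrained-from-scratch baseline, not zero, so a reasonable check is the gap to that baseline. This is just an illustrative sketch (the function name and numbers are made up, not from the paper):

```python
def forgetting_gap(forget_acc, retrained_forget_acc):
    """Distance between the unlearned model's forget-set accuracy and the
    retrained-from-scratch baseline; 0 is ideal. Pushing forget_acc below
    the baseline (over-forgetting) is penalized too, since it can reveal
    which samples were unlearned (Streisand effect)."""
    return abs(forget_acc - retrained_forget_acc)

# If the retrained baseline sits at 12% forget accuracy, a model at 5%
# is further from ideal than one at exactly 12%:
gap_over = forgetting_gap(0.05, 0.12)   # over-forgetting
gap_ideal = forgetting_gap(0.12, 0.12)  # matches the baseline
```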
Closing this as you deleted your question
Thanks for the great work. I tried to reproduce the retrained ViT model in the CIFAR-20 class unlearning scenario but have encountered some problems.