[ICLR24 (Spotlight)] "SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation" by Chongyu Fan*, Jiancheng Liu*, Yihua Zhang, Eric Wong, Dennis Wei, Sijia Liu
I read your paper and am currently trying to reproduce the experimental results. However, I have some questions, so I’m posting this issue.
Regarding the DDPM class-wise unlearning experiments, the appendix of the paper states that 80,000 iterations are used during retraining. When following the sequence of model training → mask generation → unlearning, how many iterations are used for the initial model training? Additionally, in the configs/cifar10_train.yml file, n_iters in the training section is set to 800,000. Is this value correct?
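For reference, this is the setting I am referring to, paraphrased from the training section of configs/cifar10_train.yml in the repo (other keys omitted, and the surrounding structure may differ slightly in the actual file):

```yaml
# Excerpt (paraphrased) from configs/cifar10_train.yml
training:
  n_iters: 800000   # the value I am asking about; 10x the 80,000 retraining iterations in the appendix
```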
Thank you for your assistance!