OPTML-Group / Unlearn-Saliency

[ICLR24 (Spotlight)] "SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation" by Chongyu Fan*, Jiancheng Liu*, Yihua Zhang, Eric Wong, Dennis Wei, Sijia Liu
https://www.optml-group.com/posts/salun_iclr24
MIT License

Cannot decrease the forget acc #10

Closed shaaaaron closed 5 months ago

shaaaaron commented 5 months ago

Hello, thank you for your interesting work!

However, when I tried to replicate the classification task, I didn't achieve the performance reported in the paper. The final output was a retain acc of 99.8 and a forget acc of 97.1, which is unexpected.

[screenshot of the final evaluation output]

Could you please tell me if there is anything wrong with the command I ran?

CUDA_VISIBLE_DEVICES=2  python main_train.py    --arch resnet18 --dataset cifar10  --lr 0.1  --epochs 182 \
                                                --save_dir ./test --data ../../data/

CUDA_VISIBLE_DEVICES=2  python generate_mask.py --arch resnet18 --dataset cifar10 --save_dir ./test  \
                                                --mask ./test/0model_SA_best.pth.tar --unlearn GA --num_indexes_to_replace 4500  --unlearn_epochs 1 

CUDA_VISIBLE_DEVICES=2  python -u main_random.py --unlearn RL --unlearn_epochs 10 --unlearn_lr 6e-3 --num_indexes_to_replace 4500 \
                                                --mask ./test/0model_SA_best.pth.tar --save_dir ./test --path ./test/with_0.1.pt

I would also like to ask about the meaning of the final output terms 'correctness', 'confidence', 'entropy', 'm_entropy', and 'prob'. I couldn't find their detailed definitions in the code or the paper.

I would greatly appreciate any help you can provide!

a-F1 commented 5 months ago

Thank you for your attention and interest in our work! You have executed the commands correctly! To calculate the Unlearning Accuracy (UA), you need to subtract the forget accuracy from 100.
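
For example, with the numbers from your run (a minimal sketch of the arithmetic only, not code from this repo):

# UA is 100 minus the accuracy measured on the forget set.
retain_acc = 99.8    # RA reported in your output
forget_acc = 97.1    # forget-set accuracy reported in your output

unlearning_acc = 100.0 - forget_acc
print(f"UA = {unlearning_acc:.1f}")   # -> UA = 2.9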

MIA corresponds to SVC_MIA_forget_efficacy['confidence']. For a comprehensive clarification of MIA, please refer to Appendix C.3, which is accessible at the following link: https://arxiv.org/abs/2304.04934. In the results you provided, the value of MIA is somewhat high. I suggest slightly reducing unlearn_lr to obtain a smaller Average Gap with Retrain.
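
If it helps with replication, here is a rough, hypothetical sketch of how such a gap can be tallied, assuming the average gap is taken as the mean absolute difference with Retrain over UA, RA, TA, and MIA (the helper and the example numbers below are illustrative only, not code or results from this repo):

# Hypothetical helper: mean absolute gap with Retrain over the reported metrics.
def average_gap(unlearned, retrained, metrics=("UA", "RA", "TA", "MIA")):
    return sum(abs(unlearned[m] - retrained[m]) for m in metrics) / len(metrics)

# Example with made-up numbers, only to show the calculation:
# average_gap({"UA": 2.9, "RA": 99.8, "TA": 94.0, "MIA": 60.0},
#             {"UA": 5.0, "RA": 99.9, "TA": 94.2, "MIA": 13.0})  # -> 12.35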

We will update the README ASAP to make it easier for more people to understand the metrics and replicate our results.

shaaaaron commented 5 months ago

Thank you for your explanation!

NilakshanKunananthaseelan commented 3 months ago

Hi, Thanks for your work.

I'm a bit confused about the UA metric. I got a forget set accuracy of approximately 96.6.

  1. Why is it high? Ideally, the unlearned model should have a forget set accuracy of 0.
  2. What is the intuition behind subtracting the forget set accuracy from 100 to get the Unlearning Accuracy?

a-F1 commented 3 months ago

Thank you for your interest in our work.

We are always happy to address any additional queries or concerns you may have. Please feel free to contact us whenever necessary.

NilakshanKunananthaseelan commented 3 months ago

Thanks, this makes more sense now.