Closed shaaaaron closed 5 months ago
Thank you for your attention and interest in our work! You have executed the command correctly. To calculate the Unlearning Accuracy (UA), subtract the forget-set accuracy from 100.
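A minimal sketch of that conversion, assuming `forget_acc` is the forget-set accuracy (in percent) printed by the evaluation script (the helper name is hypothetical):

```python
def unlearning_accuracy(forget_acc: float) -> float:
    """UA = 100 - forget-set accuracy, both in percent."""
    return 100.0 - forget_acc

# For the forget-set accuracy of ~96.6 reported in this thread,
# UA comes out to roughly 3.4.
ua = unlearning_accuracy(96.6)
print(ua)
```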
MIA corresponds to SVC_MIA_forget_efficacy['confidence']. For a comprehensive clarification of MIA, please refer to Appendix C.3 of the paper: https://arxiv.org/abs/2304.04934. In the results you provided, the MIA value is somewhat high; I suggest slightly reducing unlearn_lr to obtain a smaller Average Gap with Retrain.
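A hedged sketch of the "Average Gap with Retrain" comparison mentioned above, assuming it is the mean absolute difference between a method's metrics and Retrain's across the reported metrics (the metric values below are illustrative, not results from the paper):

```python
def average_gap(method_metrics: dict, retrain_metrics: dict) -> float:
    """Mean absolute per-metric difference from the Retrain baseline."""
    keys = retrain_metrics.keys()
    return sum(abs(method_metrics[k] - retrain_metrics[k]) for k in keys) / len(keys)

# Hypothetical numbers, for illustration only.
retrain = {"UA": 5.0, "RA": 100.0, "TA": 94.0, "MIA": 13.0}
method = {"UA": 3.4, "RA": 99.8, "TA": 94.5, "MIA": 19.0}
print(average_gap(method, retrain))
```

Under this reading, lowering unlearn_lr trades a little forgetting strength (UA, MIA) for metrics that sit closer to Retrain, shrinking the gap.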
We will update the README ASAP to make it easier for more people to understand the metrics and replicate our results.
Thank you for your explanation!
Hi, Thanks for your work.
I'm a bit confused about the UA metric: I got a forget-set accuracy of approximately 96.6.
Thank you for your interest in our work.
A1: Random data forgetting randomly selects 10% of the data from the entire dataset for unlearning, so the forget set includes data from all classes. This differs from class-wise forgetting, which selects data from only one class, which is why the forget-set accuracy is close to zero after unlearning in that setting.
A2: Unlearning Accuracy is a measure of unlearning efficacy that indicates whether a model has truly unlearned the forget data. In our latest work (https://arxiv.org/abs/2403.07362), we found that for a specific forget set, UA is 0. If you are interested, you are welcome to follow our latest work as well.
We are always happy to address any additional queries or concerns you may have. Please feel free to contact us whenever necessary.
Thanks, this makes more sense now.
Hello, thank you for your interesting work!
However, when I tried to replicate the classification task, I didn't achieve the performance reported in the paper. The final output was a retain accuracy of 99.8 and a forget accuracy of 97.1, which is unexpected.
Could you please tell me if there is anything wrong with the command I ran?
I would also like to ask about the meaning of the final output fields 'correctness', 'confidence', 'entropy', 'm_entropy', and 'prob'; I couldn't find detailed definitions for them in the code or the paper.
I would greatly appreciate any help you can provide!