if-loops / selective-synaptic-dampening

Fast Machine Unlearning Without Retraining Through Selective Synaptic Dampening
MIT License

Cannot reproduce the results of retrained model #1

Closed · skynbe closed this 4 months ago

skynbe commented 4 months ago

Thanks for the great work. I tried to reproduce the retrained ResNet-18 model in the CIFAR-20 class-unlearning scenario but have encountered some problems.

  1. Parameter reset: Resetting the parameters does not work well. Inspecting the parameters, I found that the reset only re-initializes the last linear layer. Is it intended to reset only the linear layer, or did I miss something?

  2. Cannot reproduce the results with parameter reset: Table 3 in the paper reports that Acc(D_r) of the retrained model is 82.11 (for unlearning the 'vehicle2' class). I manually re-initialized all layers (by modifying the reset code above) and trained the retrained model, but it only achieves 72~74% on the retain set.

Could you please look into this and let me know how to reproduce the reported numbers?

if-loops commented 4 months ago

Hi, you might not be training the baseline model for enough epochs to achieve the same results. Resetting only affects torch layers with the "reset_parameters" attribute. You should not have to change anything in the code there.
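For reference, the attribute-gated reset described here can be sketched as follows (a minimal sketch, not the repo's exact code; the function name is hypothetical):

```python
import torch.nn as nn

def reset_top_level(model: nn.Module) -> None:
    """Reset only the direct children that define reset_parameters.

    Note: nn.Sequential containers (e.g. ResNet's layer1..layer4) do
    not define reset_parameters, so modules nested inside them are
    skipped by this loop.
    """
    for layer in model.children():
        if hasattr(layer, "reset_parameters"):
            layer.reset_parameters()
```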

If the issue persists, feel free to send us more details, but it should work fine (we know from others who have used the code in their own papers).

skynbe commented 4 months ago

Thanks for the quick reply. I did first achieve the reported 82% accuracy for the baseline. As also noted in https://discuss.pytorch.org/t/how-to-reset-parameters-of-layer/120782/2, layer1~layer4 of ResNet do not have a reset_parameters attribute, so the convolutional layers inside each block are not affected by the reset. (Only the conv1 and fc layers are affected.)

When I simply run the code (without changing anything), I achieve 82%, which seems fine, but only because the reset is not actually working. When I manually re-initialize all layers, it gives 72~74% accuracy. Could you please double-check this?
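A minimal sketch of such a full re-initialization (assuming standard torch.nn modules; `model.modules()` recurses into every submodule, unlike `model.children()`):

```python
import torch.nn as nn

def reset_all(model: nn.Module) -> None:
    """Recursively re-initialize every submodule that defines
    reset_parameters, including the conv/BN layers nested inside
    Sequential blocks such as ResNet's layer1..layer4."""
    for m in model.modules():
        if hasattr(m, "reset_parameters"):
            m.reset_parameters()
```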

Could you also let me know which PyTorch version you are using, in case this is a version issue?

if-loops commented 4 months ago

Good spot. The "damage" done by resetting the available layers is still sufficient, as you can see when comparing against the relative Df and MIA values of the ViT, as well as against other papers using ResNet-18 in unlearning (e.g., Chundawat's Bad Teacher). You'll just have to retrain the fully re-initialized model for longer to reach the same accuracy again. Thanks for pointing it out.