AvivSham opened this issue 1 month ago
Hi, thanks for your interest! Unfortunately, the original models seem to have been deleted from my device. You may also want to look at issue #1. Since the detailed HP settings are already provided, we would love to help but are not sure what further instructions we could add. The results should be fine as long as they do not deviate too far due to randomness.
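Since the results depend on randomness, one way to make runs comparable is to fix every RNG seed at the top of train.py and unlearn.py. A minimal sketch (the numpy/torch lines are assumptions about the repo's stack; adapt them to whatever the scripts actually import):

```python
import os
import random

def seed_everything(seed: int) -> None:
    """Fix the common sources of randomness so repeated runs match."""
    random.seed(seed)
    os.environ["PYTHONHASHSEED"] = str(seed)
    # If the training script uses numpy / torch, seed them as well, e.g.:
    #   numpy.random.seed(seed)
    #   torch.manual_seed(seed)
    #   torch.cuda.manual_seed_all(seed)

# Same seed -> identical random draws.
seed_everything(0)
a = [random.random() for _ in range(3)]
seed_everything(0)
b = [random.random() for _ in range(3)]
assert a == b
```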
@zhangbinchi
We attempted to follow the instructions in the README file and the HPs in the paper to reproduce the results, but were unsuccessful. Specifically, we wanted to reproduce the All-CNN CIFAR-10 results, so we trained a model with the following command:
python train.py --lr 1e-3 --weight-decay 5e-4 --epochs 50 --C 20 --model cnn --batch-size 128 --dataset cifar10
Then we ran the unlearning process with the following:
python unlearn.py --lr 1e-3 --weight-decay 5e-4 --epochs 50 --C 20 --model cnn --batch-size 128 --dataset cifar10 --std 1e-3 --gamma 200 --scale 20000
Finally, the results we got by running the test_unlearn script are:
Original Model===
Unlearn Loss: 0.2989, Unlearn Accuracy: 90.80%, Unlearn Micro F1: 90.80%
Residual Loss: 0.2961, Residual Accuracy: 91.10%, Residual Micro F1: 91.10%
Test Loss: 0.5068, Test Accuracy: 83.00%, Test Micro F1: 83.00%
Unlearn Model===
Unlearn Loss: 0.2823, Unlearn Accuracy: 91.70%, Unlearn Micro F1: 91.70%
Residual Loss: 0.2835, Residual Accuracy: 91.51%, Residual Micro F1: 91.51%
Test Loss: 0.4964, Test Accuracy: 83.44%, Test Micro F1: 83.44%
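For aggregating results across runs, the metric lines printed above can be parsed with a small helper. This is a sketch that assumes the exact "Loss: …, Accuracy: …%, Micro F1: …%" format shown here; the function name is ours, not part of the repo:

```python
import re

# Matches lines like:
#   "Test Loss: 0.5068, Test Accuracy: 83.00%, Test Micro F1: 83.00%"
METRIC_RE = re.compile(
    r"(?P<split>\w+) Loss: (?P<loss>[\d.]+), "
    r"\w+ Accuracy: (?P<acc>[\d.]+)%, "
    r"\w+ Micro F1: (?P<f1>[\d.]+)%"
)

def parse_metrics(line: str) -> dict:
    """Extract split name, loss, accuracy, and micro F1 from one output line."""
    m = METRIC_RE.search(line)
    if m is None:
        raise ValueError(f"unrecognized metric line: {line!r}")
    return {
        "split": m.group("split"),
        "loss": float(m.group("loss")),
        "accuracy": float(m.group("acc")),
        "micro_f1": float(m.group("f1")),
    }

row = parse_metrics(
    "Test Loss: 0.5068, Test Accuracy: 83.00%, Test Micro F1: 83.00%"
)
print(row)  # {'split': 'Test', 'loss': 0.5068, 'accuracy': 83.0, 'micro_f1': 83.0}
```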
As you can see, we were not able to reproduce the results of the original model. Worse, the unlearning process actually improved the results on the unlearning set, which does not make sense.
Can you please guide us on how to replicate the results?
Hi, following your steps directly, the results on my end are:
Original Model===
Unlearn Loss: 0.2948, Unlearn Accuracy: 91.80%, Unlearn Micro F1: 91.80%
Residual Loss: 0.2854, Residual Accuracy: 91.42%, Residual Micro F1: 91.42%
Test Loss: 0.5090, Test Accuracy: 82.94%, Test Micro F1: 82.94%
Unlearn Model===
Unlearn Loss: 0.2891, Unlearn Accuracy: 91.60%, Unlearn Micro F1: 91.60%
Residual Loss: 0.2742, Residual Accuracy: 91.85%, Residual Micro F1: 91.85%
Test Loss: 0.4978, Test Accuracy: 82.96%, Test Micro F1: 82.96%
It is worth noting that the choice of random seed really makes a difference, since both the original training and the unlearning (including the added noise) are affected by it. You could try more random seeds and see how our method performs on average. Hope this helps.
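Following that suggestion, the per-seed metrics can be summarized as mean ± standard deviation. A small sketch using only the standard library (the accuracy values below are placeholders, not real results from the repo):

```python
import statistics

def summarize(values):
    """Return (mean, sample std) of a list of per-seed metrics."""
    mean = statistics.mean(values)
    std = statistics.stdev(values) if len(values) > 1 else 0.0
    return mean, std

# Hypothetical per-seed test accuracies collected from repeated runs.
test_acc = [83.00, 82.96, 83.44]
mean, std = summarize(test_acc)
print(f"test accuracy: {mean:.2f} +/- {std:.2f} over {len(test_acc)} seeds")
```

Reporting the spread alongside the mean makes it easy to tell whether a gap between the original and unlearned models is larger than the seed-to-seed noise.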
Hi @zhangbinchi, how are you? Thank you for your contribution and incredible work! We tried to reproduce the results reported in Table 1 for both CIFAR-10 and SVHN without success, even though we ran with the HPs mentioned in the paper. Could you please provide the pre-trained models and detailed instructions for reproducing the results with the unlearn.py script? Thanks!