Code for the ICLR 2021 Spotlight paper "Unlearnable Examples: Making Personal Data Unexploitable" by Hanxun Huang, Xingjun Ma, Sarah Monazam Erfani, James Bailey, and Yisen Wang.
The notebook contains a minimal implementation for generating sample-wise unlearnable examples on CIFAR-10. If you are only using the notebook, please remove `mlconfig` from `models/__init__.py` and copy-paste the model into the notebook.
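As a rough illustration of the error-minimizing ("min-min") idea behind the noise generation, here is a hedged numpy sketch on a toy linear classifier. All names, shapes, and hyperparameters below are illustrative, not the repo's actual code:

```python
import numpy as np

# Toy min-min (error-minimizing) noise on a linear softmax classifier.
rng = np.random.default_rng(0)
n, d, k = 64, 32, 4
X = rng.normal(size=(n, d))
y = rng.integers(0, k, size=n)
W = rng.normal(scale=0.1, size=(d, k))  # stands in for a partially trained model

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def loss_and_grad_x(X, y, W):
    """Cross-entropy loss and its gradient w.r.t. the inputs X."""
    p = softmax(X @ W)
    loss = -np.log(p[np.arange(len(y)), y] + 1e-12).mean()
    p[np.arange(len(y)), y] -= 1.0
    grad_x = p @ W.T / len(y)
    return loss, grad_x

# PGD that MINIMIZES the loss (min-min), projected to an L_inf ball.
eps, step, num_steps = 0.5, 0.05, 20
delta = np.zeros_like(X)
loss_before, _ = loss_and_grad_x(X, y, W)
for _ in range(num_steps):
    _, g = loss_and_grad_x(X + delta, y, W)
    delta = np.clip(delta - step * np.sign(g), -eps, eps)
loss_after, _ = loss_and_grad_x(X + delta, y, W)
# loss_after < loss_before: the noise makes the data look "already learned",
# which is what suppresses learning in the full training pipeline.
```

In the actual code, the same descent on the inputs is interleaved with model updates (the outer min over model weights), and epsilon is in pixel scale (8/255).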
Check the `scripts` folder for the `*.sh` script of each corresponding experiment.
Generate sample-wise noise for unlearnable examples:

```shell
python3 perturbation.py --config_path configs/cifar10 \
    --exp_name path/to/your/experiment/folder \
    --version resnet18 \
    --train_data_type CIFAR10 \
    --noise_shape 50000 3 32 32 \
    --epsilon 8 \
    --num_steps 20 \
    --step_size 0.8 \
    --attack_type min-min \
    --perturb_type samplewise \
    --universal_stop_error 0.01
```
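The resulting perturbation tensor matches `--noise_shape` (one delta per training image), and is added to the corresponding image within the epsilon ball. The repo's data loader handles this; the numpy sketch below only illustrates the arithmetic, with toy shapes and illustrative names:

```python
import numpy as np

# Toy illustration of applying saved sample-wise noise to images.
# Real CIFAR-10 shapes are (50000, 3, 32, 32); epsilon 8 means 8/255
# in [0, 1] pixel scale. Names here are illustrative, not the repo's API.
rng = np.random.default_rng(1)
images = rng.random((8, 3, 32, 32)).astype(np.float32)  # pixels in [0, 1]
noise = rng.uniform(-8 / 255, 8 / 255, images.shape).astype(np.float32)

# Sample-wise: image i receives its own noise[i]; clip to valid pixel range.
poisoned = np.clip(images + noise, 0.0, 1.0)
```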
Train a ResNet-18 on the sample-wise unlearnable CIFAR-10:

```shell
python3 -u main.py --version resnet18 \
    --exp_name path/to/your/experiment/folder \
    --config_path configs/cifar10 \
    --train_data_type PoisonCIFAR10 \
    --poison_rate 1.0 \
    --perturb_type samplewise \
    --perturb_tensor_filepath path/to/your/experiment/folder/perturbation.pt \
    --train
```
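`--poison_rate 1.0` perturbs the entire training set; a smaller rate would leave a fraction of the data clean. A hedged sketch of such a selection (illustrative only, not the repo's loader):

```python
import numpy as np

# Illustrative poison-rate selection: perturb a random fraction of the
# training set and leave the rest clean.
rng = np.random.default_rng(2)
n = 1000
poison_rate = 0.6  # fraction of training samples that receive noise

poison_idx = rng.choice(n, size=int(poison_rate * n), replace=False)
is_poisoned = np.zeros(n, dtype=bool)
is_poisoned[poison_idx] = True  # only these indices get noise added
```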
Generate class-wise noise for unlearnable examples (optimized on a subset of the training data):

```shell
python3 perturbation.py --config_path configs/cifar10 \
    --exp_name path/to/your/experiment/folder \
    --version resnet18 \
    --train_data_type CIFAR10 \
    --noise_shape 10 3 32 32 \
    --epsilon 8 \
    --num_steps 1 \
    --step_size 0.8 \
    --attack_type min-min \
    --perturb_type classwise \
    --universal_train_target 'train_subset' \
    --universal_stop_error 0.1 \
    --use_subset
```
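Note the noise shape here is `10 3 32 32`: one shared perturbation per CIFAR-10 class, so every image of class `y` receives the same delta. A hedged numpy sketch of that indexing (toy shapes, illustrative names):

```python
import numpy as np

# Toy illustration of class-wise noise: one delta per class, broadcast
# to every image of that class via fancy indexing.
rng = np.random.default_rng(3)
num_classes = 10
images = rng.random((16, 3, 32, 32)).astype(np.float32)
labels = rng.integers(0, num_classes, size=16)
class_noise = rng.uniform(-8 / 255, 8 / 255,
                          (num_classes, 3, 32, 32)).astype(np.float32)

# class_noise[labels] has shape (16, 3, 32, 32): noise[i] == class_noise[labels[i]].
poisoned = np.clip(images + class_noise[labels], 0.0, 1.0)
```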
Train a ResNet-18 on the class-wise unlearnable CIFAR-10:

```shell
python3 -u main.py --version resnet18 \
    --exp_name path/to/your/experiment/folder \
    --config_path configs/cifar10 \
    --train_data_type PoisonCIFAR10 \
    --poison_rate 1.0 \
    --perturb_type classwise \
    --perturb_tensor_filepath path/to/your/experiment/folder/perturbation.pt \
    --train
```
If you use this code in your research, please cite:

```bibtex
@inproceedings{huang2021unlearnable,
    title={Unlearnable Examples: Making Personal Data Unexploitable},
    author={Hanxun Huang and Xingjun Ma and Sarah Monazam Erfani and James Bailey and Yisen Wang},
    booktitle={ICLR},
    year={2021}
}
```