This repository provides a simple implementation of our recent work "Sparse Adversarial Attack via Perturbation Factorization", ECCV 2020.
The following demo generates sparse adversarial perturbations with the proposed attack, against a CNN model trained on CIFAR-10:

```bash
python main.py --attacked_model cifar_best.pth --img_file img0.png --target 1 --k 200
```
- `attacked_model`: the checkpoint of the model to be attacked;
- `img_file`: the benign input image;
- `target`: the target class of the attack;
- `k`: the number of pixels allowed to be perturbed.

The generated results are saved under `./results`.
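For intuition, the sketch below illustrates the factorization idea named in the title: the perturbation is written as the element-wise product of a magnitude variable and a binary selection variable, and sparsity is enforced by bounding the number of selected pixels by `k`. This is a minimal, illustrative sketch, not the repository's implementation; the names (`epsilon`, `G`, `apply_sparse_perturbation`) and the per-pixel mask shape are assumptions made for this example.

```python
import torch

def apply_sparse_perturbation(x, epsilon, G, k):
    """Illustrative only: combine the two factors of a sparse perturbation, delta = epsilon * G.

    x       : benign image, e.g. shape (3, 32, 32) for CIFAR-10, values in [0, 1]
    epsilon : perturbation magnitudes, same shape as x
    G       : binary selection mask in {0, 1}, shape (1, 32, 32) here,
              shared across channels (an assumption for this sketch)
    k       : sparsity budget, i.e. the number of perturbed pixels
    """
    assert int(G.sum().item()) <= k, "selection mask exceeds the sparsity budget k"
    delta = epsilon * G                   # element-wise factorization of the perturbation
    x_adv = (x + delta).clamp(0.0, 1.0)   # keep the adversarial image in a valid range
    return x_adv
```

In the paper the two factors are optimized jointly under the sparsity constraint; the sketch above only shows how they combine to form the final perturbation.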
If you find this work useful in your research, please consider citing:

```
@inproceedings{sapf-ECCV2020,
  title={Sparse Adversarial Attack via Perturbation Factorization},
  author={Fan, Yanbo and Wu, Baoyuan and Li, Tuanhui and Zhang, Yong and Li, Mingyang and Li, Zhifeng and Yang, Yujiu},
  booktitle={European Conference on Computer Vision},
  year={2020}
}
```