This repository reproduces the results of the paper "Saliency Attack: Towards Imperceptible Black-box Adversarial Attack", accepted by ACM Transactions on Intelligent Systems and Technology.
Install the required libraries:

```
pip install -r requirements.txt
```
Download the ImageNet validation dataset (images and the corresponding labels). Note that the validation images must be placed in a folder named `val`, and the label file must be named `val.txt`.

```
mkdir val
wget https://image-net.org/data/ILSVRC/2012/ILSVRC2012_img_val.tar
tar -xf ILSVRC2012_img_val.tar -C val
wget http://dl.caffe.berkeleyvision.org/caffe_ilsvrc12.tar.gz
tar -xvzf caffe_ilsvrc12.tar.gz val.txt
```
Place the directory `val` and the file `val.txt` in the same directory, as checked by the sketch below.
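As a sanity check, the following sketch verifies this layout. It assumes the standard `caffe_ilsvrc12` label format, where each line of `val.txt` is an image filename followed by an integer class label (e.g. `ILSVRC2012_val_00000001.JPEG 65`); the `root` path is a placeholder.

```python
import os

root = "."  # directory containing both val/ and val.txt (adjust as needed)

# Parse val.txt: each line is "<filename> <integer label>".
labels = {}
with open(os.path.join(root, "val.txt")) as f:
    for line in f:
        name, label = line.split()
        labels[name] = int(label)

# Check that every labeled image actually exists in val/.
missing = [n for n in labels if not os.path.isfile(os.path.join(root, "val", n))]
print(f"{len(labels)} labels, {len(missing)} missing images")
```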
Download a pretrained Inception-v3 model from the TensorFlow model library and decompress it.

```
wget http://download.tensorflow.org/models/inception_v3_2016_08_28.tar.gz
tar -xvzf inception_v3_2016_08_28.tar.gz
```
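The archive contains a checkpoint file, `inception_v3.ckpt`. If you want to confirm it decompressed correctly, a minimal check (assuming a TensorFlow version that exposes `tf.train.list_variables`) is:

```python
import tensorflow as tf

# Path to the decompressed checkpoint; adjust if you extracted elsewhere.
ckpt = "inception_v3.ckpt"

# List a few variables stored in the checkpoint as a sanity check.
for name, shape in tf.train.list_variables(ckpt)[:5]:
    print(name, shape)
```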
Set `IMAGENET_PATH` in `main.py` and `MODEL_DIR` in `tools/inception_v3_imagenet.py` to the locations of the dataset and the model, respectively.
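For example, the two assignments might look like the following; the paths here are placeholders, so substitute the locations from the previous steps:

```python
# In main.py: directory that contains val/ and val.txt
IMAGENET_PATH = "/data/imagenet"

# In tools/inception_v3_imagenet.py: directory containing inception_v3.ckpt
MODEL_DIR = "/data/models/inception_v3"
```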
Run the attack:

```
python main.py --sample_size 1000 --epsilon 0.05 --max_queries 10000 --block_size 16
```
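To compare results under different perturbation budgets, you can loop over `--epsilon` values while keeping the other flags fixed. A minimal sketch (the flag names and the non-epsilon values are copied from the command above; the epsilon list is illustrative):

```python
import subprocess

# Sweep the perturbation budget while keeping the other flags fixed.
for eps in ["0.01", "0.03", "0.05"]:
    subprocess.run([
        "python", "main.py",
        "--sample_size", "1000",
        "--epsilon", eps,
        "--max_queries", "10000",
        "--block_size", "16",
    ], check=True)
```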