This repository is the official PyTorch implementation of Energy-Latency Attacks via Sponge Poisoning.
In the figure above, we illustrate the effect of sponge poisoning on DNNs. (Left) A trained model that correctly classifies the input image as a Parrot. (Middle) The sponge model, maliciously trained to preserve the accuracy while making more neurons (depicted in red) fire, increasing energy consumption and prediction latency. (Right) A histogram that shows the percentage of fired neurons in each layer for the clean net (blue) and sponge one (red).
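To give an intuition of the training objective behind this behavior, the sketch below shows one way a sponge-style loss could be written in PyTorch: standard cross-entropy plus a differentiable term that rewards non-zero activations. This is only an illustrative sketch, not the code in sponger.py; the ℓ0-approximation, the helper names, and the default values of sigma and lb are assumptions made for this example.

```python
import torch
import torch.nn.functional as F

def l0_approx(act, sigma):
    # Differentiable approximation of the number of non-zero activations:
    # x^2 / (x^2 + sigma) tends to 1 for |x| >> sqrt(sigma) and to 0 for x ~ 0.
    return (act ** 2 / (act ** 2 + sigma)).sum()

def sponge_loss(model, x, y, activations, sigma=1e-1, lb=1.0):
    # `activations` is a list of intermediate tensors, e.g. collected with
    # forward hooks on the model's layers (sigma/lb values are illustrative).
    logits = model(x)
    ce = F.cross_entropy(logits, y)                        # preserve clean accuracy
    energy = sum(l0_approx(a, sigma) for a in activations)
    # Minimizing (ce - lb * energy) pushes more neurons to fire, which is what
    # makes the sponge model more energy-hungry and slower at inference time.
    return ce - lb * energy
```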
To improve the reproducibility of our experiments, we release our Anaconda environment, which contains all dependencies with their corresponding software versions. The environment can be installed by running the following command:
conda env create -f env.yml
Once the environment is created, activate it by typing conda activate spongepoisoning.
Moreover, the slurm folder contains the scripts used to run our experiments. Each slurm file reports the hardware configuration used.
The code is structured as follows: the results of each run are saved in experiments_results/{args.dataset}/{args.net}. This folder contains the clean {args.net} model trained on the {args.dataset} dataset together with the sponge {args.net} model trained with budget p={args.budget}, $\sigma$={args.sigma}, and $\lambda$={args.lb}.
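For concreteness, the snippet below simply shows how that placeholder path resolves for the sample run used later in this README (GTSRB with ResNet18); it is illustrative only, and the actual saving logic lives in sponger.py.

```python
import os

# Values taken from the sample command below (illustrative only).
dataset, net = "GTSRB", "resnet18"
results_dir = os.path.join("experiments_results", dataset, net)
print(results_dir)  # -> experiments_results/GTSRB/resnet18
```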
To mount a sponge poisoning attack on the GTSRB dataset with a ResNet18 model, you can use the following sample command:
python -u sponger.py --net="resnet18" --dataset=GTSRB --epochs=100 --max_epoch=100 --scenario="from-scratch" --noaugment --batch_size=512 --optimization="sponge_exponential" --sources=100 --budget=0.05 --sigma=1e-01
This command will train a clean ResNet18 and a sponge one, saving both in experiments_results/GTSRB/resnet18. However, you can skip the clean training by passing --load="net", as done in the following:
python -u sponger.py --net="resnet18" --load="net" --dataset=GTSRB --epochs=100 --max_epoch=100 --scenario="from-scratch" --noaugment --batch_size=512 --optimization="sponge_exponential" --sources=100 --budget=0.05 --sigma=1e-01
The scripts eval_table_stats.py and layers_activations.py are then used to further analyze the results produced by sponger.py.
The following two commands produce the per-layer activation histograms and the energy consumption statistics for the sponge ResNet18 trained on the GTSRB dataset.
python -u layers_activations.py --net="resnet18" --dataset=GTSRB
python -u eval_table_stats.py --net="resnet18" --dataset=GTSRB
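As a rough illustration of what "fired neurons per layer" means in these statistics, the sketch below counts the fraction of positive ReLU outputs per layer using PyTorch forward hooks. It is not the code of layers_activations.py; the model constructor, class count, and input size are placeholder assumptions.

```python
import torch
import torchvision

# Placeholder model: torchvision's ResNet18 with 43 outputs (GTSRB classes).
model = torchvision.models.resnet18(num_classes=43).eval()
fired = {}

def make_hook(name):
    def hook(module, inputs, output):
        # Fraction of activations that are strictly positive after this ReLU.
        # Note: torchvision reuses one ReLU instance per block, so the stored
        # value corresponds to its last application in the forward pass.
        fired[name] = (output > 0).float().mean().item()
    return hook

for name, module in model.named_modules():
    if isinstance(module, torch.nn.ReLU):
        module.register_forward_hook(make_hook(name))

with torch.no_grad():
    model(torch.randn(1, 3, 64, 64))  # dummy input; size is illustrative

for name, ratio in fired.items():
    print(f"{name}: {100 * ratio:.1f}% neurons fired")
```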
Finally, the Jupyter notebook plotting.ipynb can be used to generate the figures for our ablation study on the two hyperparameters $\sigma$ and $\lambda$.
Our implementation uses: