Gabriele91 opened this issue 3 years ago
Hi @Gabriele91 , thanks for your interest in our work.
We have made a detailed guideline to reproduce the results in the paper:
Thank you for your answer. I followed your guide and your code ran fine, but I got an attack detection rate (TPR) of 25%, an accuracy of 57% (is that the model's accuracy on the dataset with squeezers?), and a threshold of 1.149.
In short, the issue is: I got different results with respect to your paper.
So, my question is: Why?
Did I make a mistake?
Where is my mistake?
How should I read the application output?
Do I have to rescale the TPR by the model accuracy? (i.e. 25% * (100/57) ≈ 43%, which is similar to the paper's result in Table 4.)
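For context on that question, here is a minimal, hypothetical sketch of how these detection metrics are conventionally defined from confusion-matrix counts (the function name and the counts are illustrative, not taken from the EvadeML code). Under these standard definitions, TPR and accuracy are independent ratios, and TPR is not normally rescaled by accuracy.

```python
# Hypothetical sketch: conventional detection metrics from confusion counts.
# tp/fn: adversarial inputs detected/missed; fp/tn: legit inputs flagged/passed.
def detection_metrics(tp, fn, fp, tn):
    tpr = tp / (tp + fn)                    # true positive rate (detection rate)
    fpr = fp / (fp + tn)                    # false positive rate
    acc = (tp + tn) / (tp + fn + fp + tn)   # accuracy of the detector
    return tpr, fpr, acc

# Illustrative counts only: 12 of 48 adversarial inputs detected,
# 5 of 51 legitimate inputs falsely flagged.
tpr, fpr, acc = detection_metrics(tp=12, fn=36, fp=5, tn=46)
print(round(tpr, 2), round(fpr, 3), round(acc, 2))
```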
Hi @Gabriele91 , the code we provided should generate exactly the same results, as we had verified three years ago before releasing the instructions.
Please note that our code has a lot of dependencies on other packages, as stated in requirements_cpu.txt or requirements_gpu.txt. Unfortunately, we didn't record the exact version numbers of all packages. You may go back to the release date of those files and fetch the latest packages available from pip at that time.
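As a sketch of that advice, a dated pin in a requirements file could look like the fragment below. The version numbers are only the ones mentioned elsewhere in this thread, not a verified working environment:

```
tensorflow==1.3.0
keras==2.0.0
```

Pinning with `==` in this way makes pip resolve exactly those historical releases rather than the latest versions.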
I was using TensorFlow 1.14 and Keras 2.0.1, so I'm going to switch to Keras 2.0.0 and TensorFlow 1.3 (from 2017). Anyway, which program output value corresponds to the value reported in your paper? I guess it's the TPR (or the detection rate on SAEs), is that right?
Thank you for your time and your tips. Any suggestion will be appreciated.
Dear Mzweilin, First of all, thank you for your incredible work.
Anyway, I had some trouble reproducing the paper's results. More precisely, I tried to reproduce the ImageNet results (FGSM attack). So, I used the ImageNet validation set, with the same settings as the paper for the FS detection:
So, I got an attack detection rate of ~23% (instead of ~43%). Why is this result so different from the paper's?
To help you figure out what I did, I report the command line I used along with its output.
```
---Attack (uint8): fgsm?eps=0.0078
Success rate: 99.00%, Mean confidence of SAEs: 99.47%
Statistics of the SAEs:
  L2 dist: 3.0134, Li dist: 0.0078, L0 dist_value: 98.5%, L0 dist_pixel: 99.4%
===Adversarial image examples are saved in results/ImageNet_100_6cf69_mobilenet/ImageNet_100_6cf69_mobilenet_attacks_0b2d7_examples.png
Loaded an existing detection dataset.
Loaded a pre-defined threshold value 1.212800
Detector: FeatureSqueezing?squeezers=bit_depth_5,median_filter_2_2,non_local_means_color_11_3_4&distance_measure=l1&threshold=1.2128
Accuracy: 0.570000 TPR: 0.224490 FPR: 0.098039 ROC-AUC: 0.692677
Detection rate on SAEs: 0.2292  11/ 48  fgsm?eps=0.0078
Overall detection rate on SAEs: 0.229167 (11/48)
Excluding FAEs:
Overall TPR: 0.229167 ROC-AUC: 0.688725
Overall detection rate on FAEs: 0.0000  0/ 1
```
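Reading the output above: the "Overall detection rate on SAEs" line is simply the detected count over the number of successful adversarial examples, as this quick standalone check shows (not code from the repository):

```python
# Sanity check on "Overall detection rate on SAEs: 0.229167 (11/48)":
# the rate is detected SAEs divided by total SAEs.
detected, total_saes = 11, 48
print(f"{detected / total_saes:.6f}")  # matches the 0.229167 in the log
```

Note that the separate "TPR: 0.224490" line differs slightly from this value, presumably because it is computed over a different evaluation set than the SAE-only summary.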