erichson / JumpReLU


How to reproduce the results in the paper #2

Closed by overflocat 5 years ago

overflocat commented 5 years ago

For reproducing the results in the paper, I tried to run the white box attack on AlexNet:

python attack_WhiteBox.py --eps 0.01 --test-batch-size 500 --arch AlexLike --resume cifar10_result/AlexLike_baseline.pkl --dataset cifar10 --iter 7 --iter_df 7 --runs 1 --jump  0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8

which is specified in train_protocol.md. However, with jump=0.4 (and iter_tr set to 1000), I got results like:

+--------------+------------+---------+--------------+----------+---------+
|              | Clean Data |  IFGSM  | DeepFool_inf | DeepFool |    TR   |
+--------------+------------+---------+--------------+----------+---------+
|  Accuracy:   |   0.8832   |  0.1293 |     0.0      |   0.0    |  0.0005 |
| Rel. Noise:  |    0.0     | 0.01453 |   0.06617    | 0.07672  | 0.01547 |
| Abs. Noise:  |    0.0     | 0.03598 |    0.1619    | 4.76095  | 0.99368 |
+--------------+------------+---------+--------------+----------+---------+

which is quite different from the results in the paper. From Table 2(B) in the paper, the results should be something like:

|                 | Clean Data |  IFGSM  | DeepFool_inf | DeepFool |   TR   |
| JumpReLU (Base) |   87.52%   |  18.56% |   (9.80%)    | (10.6%)  | (1.7%) |

The accuracy of DeepFool_inf in the paper is 9.80%, however I got 0%. Did I do anything wrong? How to correctly reproduce the result in the paper?

erichson commented 5 years ago

@overflocat thanks for checking out our code! Your results look roughly right; I just reran the code on a fresh AWS EC2 instance. Here are my results:

Jump value:  0.4
+--------------+------------+---------+--------------+----------+---------+
|              | Clean Data |  IFGSM  | DeepFool_inf | DeepFool |    TR   |
+--------------+------------+---------+--------------+----------+---------+
|  Accuracy:   |   0.8832   |  0.1304 |     0.0      |   0.0    |  0.0005 |
| Rel. Noise:  |    0.0     | 0.01452 |   0.06618    | 0.07672  | 0.01547 |
| Abs. Noise:  |    0.0     | 0.03597 |   0.16192    | 4.76102  | 0.99369 |
+--------------+------------+---------+--------------+----------+---------+

This corresponds with your results. As Table 2 in the paper states, the DeepFool method is able to fool all instances using only 7 iterations, which is why the accuracy is 0%. The numbers in parentheses in Table 2 are the average minimum perturbations, i.e., the Rel. Noise row above.
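For readers comparing their own runs against the Rel. Noise row: a plausible definition of that metric is the per-example perturbation norm divided by the clean input's norm, averaged over the batch. This is a hypothetical sketch for illustration, not necessarily the exact formula used in the repo:

```python
import numpy as np

def relative_noise(x_clean, x_adv):
    """Average relative perturbation over a batch (hypothetical
    definition; the repo's exact norm/averaging may differ)."""
    n = len(x_clean)
    delta = (x_adv - x_clean).reshape(n, -1)  # flatten each example
    x = x_clean.reshape(n, -1)
    # Per-example ||delta|| / ||x||, then mean over the batch.
    return np.mean(np.linalg.norm(delta, axis=1) / np.linalg.norm(x, axis=1))
```

With this definition, a jump=0.4 Rel. Noise of ~0.066 for DeepFool_inf would mean the attack needs, on average, a perturbation about 6.6% the size of the clean input to flip the prediction.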

However, it seems that we reported the results for kappa=0.5 in our paper (we have to fix this):

Jump value:  0.5
+--------------+------------+---------+--------------+----------+
|              | Clean Data |  IFGSM  | DeepFool_inf | DeepFool |
+--------------+------------+---------+--------------+----------+
|  Accuracy:   |   0.8752   |  0.1851 |     0.0      |   0.0    |
| Rel. Noise:  |    0.0     | 0.01568 |   0.09823    | 0.10641  |
| Abs. Noise:  |    0.0     |  0.0388 |   0.23935    | 6.57733  |
+--------------+------------+---------+--------------+----------+
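For context on what the jump (kappa) value sweeps above actually control: JumpReLU behaves like ReLU except that activations at or below the jump threshold are zeroed out, which is why raising kappa from 0.4 to 0.5 trades a little clean accuracy (0.8832 to 0.8752) for robustness (IFGSM accuracy 0.1304 to 0.1851). A minimal NumPy sketch of the activation, not the repo's implementation:

```python
import numpy as np

def jump_relu(x, jump=0.0):
    """JumpReLU: pass x through only where it exceeds the jump
    threshold, zero elsewhere (with jump=0.0 this reduces to ReLU)."""
    return np.where(x > jump, x, 0.0)

# Larger jump values suppress more small activations:
x = np.array([-1.0, 0.2, 0.4, 0.9])
print(jump_relu(x, jump=0.4))  # only 0.9 survives the threshold
```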

Does this clarify your question?

Best, Ben

overflocat commented 5 years ago

Yes, thanks for your explanation!