hszhao / SAN

Exploring Self-attention for Image Recognition, CVPR2020.
MIT License

Robustness to Adversarial Attacks #6

Closed leodmel closed 4 years ago

leodmel commented 4 years ago

Hi, could you kindly provide the implementation details of how you obtained the results in Table 10 of your paper, i.e., the part on robustness to adversarial attacks? I have tried several different targeted PGD methods, but my results still differ noticeably from yours.

hszhao commented 4 years ago

Hi, we use the 'foolbox' library to generate the adversarial samples. Thanks. https://foolbox.readthedocs.io/en/stable