Closed: shao-hua-li closed this issue 6 years ago
I also noticed that `new_o` in your boundary_attack.py [line 76] is actually not an adversarial example, or even a legal image.
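To make clear what I mean by that: a candidate returned by the attack should both stay inside the legal pixel range and actually change the model's prediction. A minimal sketch of such a check, just to illustrate the point (`predict_fn`, `candidate`, and `original_label` are placeholder names, and I am assuming pixel values in [0, 1]):

```python
import numpy as np

def is_valid_adversarial(predict_fn, candidate, original_label, lo=0.0, hi=1.0):
    # A legal image keeps every pixel inside the allowed range.
    in_range = np.all((candidate >= lo) & (candidate <= hi))
    # An adversarial example must change the predicted class.
    fools_model = np.argmax(predict_fn(candidate)) != original_label
    return bool(in_range and fools_model)
```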
Hi, boundary_attack.py was not used to evaluate the results in the paper. We first implemented our own version of the attack, but found it was not comparable with the foolbox version, so the paper uses the foolbox implementation instead. In other words, this file is not valid at all; please refer to foolbox if you want to run the boundary attack.
Hi, I am wondering if you could share the boundary-attack code for CIFAR and MNIST, as I cannot find it in foolbox... Thanks!
Hi, torchvision has an API to achieve it directly. It's what we used in our code as well.
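For example, loading MNIST and CIFAR-10 with torchvision looks roughly like this (a minimal sketch; the `./data` path is just a placeholder):

```python
import torchvision
import torchvision.transforms as transforms

transform = transforms.ToTensor()  # converts PIL images to [0, 1] tensors

# Test splits of MNIST and CIFAR-10, downloaded on first use.
mnist_test = torchvision.datasets.MNIST(root="./data", train=False,
                                        download=True, transform=transform)
cifar_test = torchvision.datasets.CIFAR10(root="./data", train=False,
                                          download=True, transform=transform)
```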
Sorry, I think I don't get your point. Do you mean there is boundary-attack code for MNIST and CIFAR in the `torchvision` package? Or does the boundary attack provide such code using torchvision?
Sorry, I thought you were asking for the dataset. The boundary attack is at https://github.com/bethgelab/foolbox/blob/master/foolbox/attacks/boundary_attack.py. Please see their docs first to learn how to use it.
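Usage looks roughly like the following (a minimal sketch assuming the older 1.x/2.x-style foolbox API; `model`, `image`, and `label` are placeholders, so please check the docs of the foolbox version you install):

```python
import foolbox

# Assuming a trained PyTorch classifier `model` with inputs scaled to [0, 1].
fmodel = foolbox.models.PyTorchModel(model.eval(), bounds=(0, 1), num_classes=10)

# `image` is a numpy array of shape (C, H, W) in [0, 1]; `label` is its true class.
attack = foolbox.attacks.BoundaryAttack(fmodel)
adversarial = attack(image, label)  # returns the adversarial example as a numpy array
```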
Hi, I found that your reimplementation of the boundary attack is quite different from the original version. For example, in your boundary_attack.py [lines 72-80], you don't use any `orthogonal_perturbation`, which is a core idea of the boundary attack [see https://github.com/greentfrapp/boundary-attack/blob/5ce924cf62a041b218cd4a0d44a5e0d7c7619813/demo/server/controllers/boundaryattack_controller.py#L87].
Maybe there is some misunderstanding on my side, and I would like to ask if you could possibly explain this. Thank you very much :)
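For reference, my (possibly wrong) understanding of the orthogonal step is roughly the following, as a minimal sketch rather than either implementation (shapes and the rescaling back onto the sphere are simplified):

```python
import numpy as np

def orthogonal_perturbation(delta, current, original):
    # Sample a random direction, scaled relative to the distance to the original image.
    d_orig = np.linalg.norm(current - original)
    perturb = np.random.randn(*current.shape)
    perturb *= delta * d_orig / np.linalg.norm(perturb)

    # Keep only the component orthogonal to the direction back to the original,
    # i.e. a step roughly tangent to the hypersphere around `original`.
    direction = (original - current) / d_orig
    perturb -= np.dot(perturb.flatten(), direction.flatten()) * direction
    return perturb
```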