bethgelab / foolbox

A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
https://foolbox.jonasrauber.de
MIT License

Jacobian based saliency map attack - JSMA #252

Closed: akhilesh-pandey closed this issue 5 years ago

akhilesh-pandey commented 5 years ago

The original paper for JSMA (https://arxiv.org/abs/1511.07528) uses the forward derivative to compute the Jacobian, which is then used to find the most important pixels to perturb. I am trying to implement that paper in PyTorch for the LeNet-5 architecture but don't know how to start the computation of the Jacobian. Everything else is fine except for the computation of the Jacobian. I tried finding help in the `deepfool` API but didn't find anything about computing this gradient. It would be very nice if somebody could help me out.
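(For reference, a minimal sketch of computing such a Jacobian in PyTorch, assuming a model that maps a single image to a logits vector of shape `(num_classes,)`; the function name and shapes are illustrative, not from the paper or Foolbox:)

```python
import torch

def jacobian(model, x, num_classes):
    """Compute the Jacobian of the model's logits w.r.t. the input x.

    Returns a tensor of shape (num_classes, *x.shape): one gradient
    map per output class (the "forward derivative" used by JSMA).
    """
    x = x.clone().detach().requires_grad_(True)
    logits = model(x.unsqueeze(0)).squeeze(0)  # shape: (num_classes,)
    grads = []
    for c in range(num_classes):
        # Gradient of the c-th logit w.r.t. the input pixels
        grad_c, = torch.autograd.grad(logits[c], x, retain_graph=True)
        grads.append(grad_c)
    return torch.stack(grads)
```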

wielandbrendel commented 5 years ago

Dear @akhilesh-pandey, this attack is already implemented under the name "Saliency Map Attack" in Foolbox: https://github.com/bethgelab/foolbox/blob/master/foolbox/attacks/saliency.py#L11-L179
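A rough usage sketch, assuming the Foolbox 1.x-style API that was current at the time (the `model`, `image`, and `label` variables here are placeholders for your own trained LeNet-5 and data):

```python
import foolbox

# model: a trained PyTorch classifier, e.g. LeNet-5, in eval mode
model.eval()
fmodel = foolbox.models.PyTorchModel(model, bounds=(0, 1), num_classes=10)

# JSMA is implemented as SaliencyMapAttack
attack = foolbox.attacks.SaliencyMapAttack(fmodel)

# image: numpy array (e.g. shape (1, 28, 28)) in [0, 1]; label: int ground truth
adversarial = attack(image, label)
```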

@jonasrauber : Maybe we should have JSMA as an abbreviation (like FGSM)?