This repository implements three adversarial example attacks (FGSM, I-FGSM, and MI-FGSM) and defensive distillation as a defense against all three, evaluated on the MNIST dataset.
Note: I am not sure the MI-FGSM code is correct. In MI-FGSM the gradient should be recomputed at every iteration and accumulated into a momentum term, rather than computed once and reused as a single value for all iterations. A minimal sketch of the expected per-iteration update follows.
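For reference, here is a minimal PyTorch sketch of MI-FGSM as described in Dong et al. (2018), assuming a classifier `model` and MNIST inputs scaled to [0, 1]; the function name `mi_fgsm` and the parameters `eps`, `num_iter`, and `mu` are illustrative, not taken from this repository. The key point is that `grad` is recomputed inside the loop on the current adversarial input and folded into the momentum buffer `g` at every step.

```python
import torch
import torch.nn.functional as F

def mi_fgsm(model, x, y, eps=0.3, num_iter=10, mu=1.0):
    """Illustrative MI-FGSM sketch: a fresh gradient is computed each
    iteration and accumulated into a momentum buffer g (g_0 = 0)."""
    alpha = eps / num_iter              # per-step size
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)             # momentum accumulator

    for _ in range(num_iter):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]   # recomputed every step

        # normalize the gradient by its per-example L1 norm, then
        # accumulate it into the momentum term
        l1 = grad.abs().flatten(1).sum(dim=1).clamp_min(1e-12)
        g = mu * g + grad / l1.view(-1, 1, 1, 1)

        # take a signed step and project back into the valid ranges
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)  # eps-ball
        x_adv = torch.clamp(x_adv, 0.0, 1.0)                   # pixel range

    return x_adv.detach()
```

If the existing code computes the gradient once before the loop and only applies `sign()` repeatedly, it degenerates to a rescaled FGSM rather than MI-FGSM, so the per-iteration recomputation above is the fix the question is pointing at.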