This repository implements three adversarial example attack methods (FGSM, I-FGSM, and MI-FGSM) and defensive distillation as a defense against all of them, using the MNIST dataset.
The code for converting target labels to soft labels:

```python
for data in train_loader:
    input, label = data[0].to(device), data[1].to(device)
    softlabel = F.log_softmax(modelF(input), dim=1)
    data[1] = softlabel
```

This does not actually convert the target labels to soft labels: `data[1] = softlabel` only overwrites an element of the local `data` list yielded by the loader, so `label` and the underlying dataset are untouched, and the labels remain unchanged in the code that follows.
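One way to fix this is to skip the write-back entirely and compute the teacher's soft labels inside the training loop, feeding them to a soft-target loss. The sketch below is a minimal, runnable illustration of that pattern, not the repo's actual training code: the student model, optimizer, temperature `T`, and the dummy MNIST-shaped tensors are all assumptions, and `modelF` stands in for the repo's teacher network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset

device = torch.device("cpu")

# Hypothetical stand-ins for the repo's teacher (modelF) and student networks.
modelF = nn.Linear(784, 10).to(device)    # teacher, assumed already trained
student = nn.Linear(784, 10).to(device)

# Dummy MNIST-shaped batches just to make the sketch runnable.
inputs = torch.randn(64, 784)
labels = torch.randint(0, 10, (64,))
train_loader = DataLoader(TensorDataset(inputs, labels), batch_size=32)

T = 20.0  # distillation temperature (an assumed value; the repo's may differ)
optimizer = torch.optim.SGD(student.parameters(), lr=0.1)
modelF.eval()

for data in train_loader:
    input = data[0].to(device)
    # Compute the soft labels here, inside the loop, instead of writing them
    # back into `data[1]` (which never reaches the dataset or later code).
    with torch.no_grad():
        softlabel = F.softmax(modelF(input) / T, dim=1)
    # KL divergence between the student's temperature-scaled log-probabilities
    # and the teacher's soft labels is the usual distillation loss.
    logprob = F.log_softmax(student(input) / T, dim=1)
    loss = F.kl_div(logprob, softlabel, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Note that `F.kl_div` expects log-probabilities as its first argument and probabilities as its second, which is why the teacher's output goes through `softmax` while the student's goes through `log_softmax`.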