Hi, I generated adversarial examples using the FGSM and DeepFool attacks from deepbox, and saved them in pickle format. When I tested them on the original model, the predicted probability was near 0.5 (my model is a binary classifier), so the attacks worked as expected. But when I tested the same examples on a model with a different architecture, the probabilities were almost identical to those of the clean examples. Why are the adversarial examples not transferable between the two models, as the theory in the paper suggests?
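For reference, this is roughly how I run the transfer check. It is only a sketch: `model_a` / `model_b` and the `predict_proba`-style interface are placeholders for however your two models actually expose predictions, and the pickle file names are made up.

```python
# Sketch of the transfer test described above.
# model_a / model_b and predict_proba() are hypothetical placeholders --
# substitute the actual objects and prediction API of your two models.
import pickle

import numpy as np

# Load the pickled adversarial examples and the matching clean originals.
with open("adversarial_examples.pkl", "rb") as f:
    adv_examples = pickle.load(f)
with open("clean_examples.pkl", "rb") as f:
    clean_examples = pickle.load(f)


def positive_class_prob(model, batch):
    """Return P(class = 1) for each input, assuming a predict_proba-style API."""
    return np.asarray(model.predict_proba(batch))[:, 1]


# Source model: adversarial probabilities should sit near the 0.5 boundary.
print("model A, clean:", positive_class_prob(model_a, clean_examples).mean())
print("model A, adv:  ", positive_class_prob(model_a, adv_examples).mean())

# Target model (different architecture): if the examples transfer, the
# probabilities on the adversarial batch should also shift noticeably.
print("model B, clean:", positive_class_prob(model_b, clean_examples).mean())
print("model B, adv:  ", positive_class_prob(model_b, adv_examples).mean())
```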