Hi.
I am reproducing your experiment, but I ran into some trouble, so I would like to ask for your help.
In my experiment, I chose Inception V3 as the target model and used FGSM with epsilon = 0.04. First, I selected 5000 pictures that the model classifies correctly, so the clean accuracy is 100%. Then I generated the adversarial examples, and the model's accuracy dropped to 38%. But when I applied resize and padding to defend against these adversarial examples, the accuracy was still poor, only 36%. For the randomization, I used random numbers, which is equivalent to generating one transformed image per adversarial sample. I normalized all the data in the experiment to [-1, 1].
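For reference, my attack step is essentially the following (a minimal PyTorch-style sketch; the function name is mine, and the clamping range follows my [-1, 1] normalization):

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon=0.04):
    """One-step FGSM on inputs normalized to [-1, 1]."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()     # single signed-gradient step
    return x_adv.clamp(-1.0, 1.0).detach()  # stay inside the valid input range
```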
I have also read your open-source code, but I still have no idea how to make the defense work.
I don't know whether my defense code is written incorrectly, which would explain why the effect is not good.
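This is roughly what my random resize-and-padding step does (a minimal NumPy sketch of my own code; the function name, the output size of 331, and the resize range are my choices, not taken from your repository):

```python
import numpy as np

def random_resize_and_pad(x, out_size=331, rng=None):
    """Randomly resize x (H, W, C) to a size in [H, out_size) and zero-pad to out_size."""
    rng = np.random.default_rng() if rng is None else rng
    h, w, c = x.shape
    new_size = int(rng.integers(h, out_size))   # random target size, e.g. 299..330
    # nearest-neighbour resize via index arrays (no external image library needed)
    rows = np.arange(new_size) * h // new_size
    cols = np.arange(new_size) * w // new_size
    resized = x[rows][:, cols]
    # random placement inside the padded canvas
    pad_top = int(rng.integers(0, out_size - new_size + 1))
    pad_left = int(rng.integers(0, out_size - new_size + 1))
    padded = np.zeros((out_size, out_size, c), dtype=x.dtype)  # 0 is mid-gray on [-1, 1] data
    padded[pad_top:pad_top + new_size, pad_left:pad_left + new_size] = resized
    return padded
```

Each adversarial image goes through this transform once (a single random draw) before being fed back to Inception V3. Could you tell me whether this matches what your defense is supposed to do?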