cihangxie / NIPS2017_adv_challenge_defense

Mitigating Adversarial Effects Through Randomization
MIT License
118 stars 20 forks

Does this still work on MNIST? #7

Open HaoerSlayer opened 4 years ago

HaoerSlayer commented 4 years ago

Hi,

I was trying this defense with a two-conv-layer model trained on MNIST. However, it seems nearly useless: it only improves the accuracy under FGSM from 4.2% to 16.04%. Because of my limited compute, I can't work with 299*299 images, so I'm not sure whether this poor improvement is caused by a mistake in my implementation or by the difference in data. Could you give me some suggestions?

cihangxie commented 4 years ago

I have not tried any experiments on MNIST.

But a precondition for this defense to work is that your classifier should still perform well when the resizing operation is applied to clean images. E.g., if the accuracy of your classifier on clean inputs is 99%, then its accuracy on RESIZED clean inputs should also be ~99%.
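To make the check concrete, here is a minimal NumPy sketch of the kind of sanity test meant above: compare accuracy on clean inputs against accuracy on resized clean inputs. The classifier here is a toy mean-intensity threshold on synthetic data (purely illustrative, not the repo's model or a real MNIST network), and `resize_nn` is a simple nearest-neighbour resize I'm assuming for illustration:

```python
import numpy as np

def resize_nn(img, size):
    """Nearest-neighbour resize of a square 2-D image to (size, size)."""
    h, w = img.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[np.ix_(rows, cols)]

def accuracy(predict_fn, images, labels):
    preds = [predict_fn(im) for im in images]
    return float(np.mean(np.array(preds) == labels))

# Toy stand-in classifier (hypothetical): predicts 1 if the mean pixel > 0.5.
predict = lambda im: int(im.mean() > 0.5)

# Synthetic "digits": class-0 images are dark, class-1 images are bright,
# so the toy classifier separates them with a wide margin.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=100)
images = labels[:, None, None] * 0.6 + 0.2 + 0.1 * rng.random((100, 28, 28))

clean_acc = accuracy(predict, images, labels)
resized_acc = accuracy(predict, [resize_nn(im, 34) for im in images], labels)
# The two numbers should be close; if resized accuracy drops sharply,
# the defense has no chance of working on top of this classifier.
```

With a real model you would replace `predict` with your network's argmax output and run the same comparison over a held-out clean set.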

You do not need to resize the MNIST images to 299 (that is far too large). Randomly resizing them within the range [30, 40] should work.
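For the 28x28 MNIST case, the random resize-then-pad layer from the paper could be sketched like this. This is a NumPy illustration under my own assumptions (nearest-neighbour resizing, zero padding, the [30, 40] range suggested above); the repo itself implements this step in TensorFlow:

```python
import numpy as np

def resize_nn(img, size):
    """Nearest-neighbour resize of a square 2-D image to (size, size)."""
    h, w = img.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[np.ix_(rows, cols)]

def randomize(img, low=30, high=40, rng=None):
    """Random resizing + random padding, as in the randomization defense:
    resize a 28x28 image to a random r in [low, high), then zero-pad it
    at a random offset inside a (high, high) canvas."""
    rng = np.random.default_rng() if rng is None else rng
    r = int(rng.integers(low, high))
    resized = resize_nn(img, r)
    out = np.zeros((high, high), dtype=img.dtype)
    top = int(rng.integers(0, high - r + 1))
    left = int(rng.integers(0, high - r + 1))
    out[top:top + r, left:left + r] = resized
    return out

# Example: a dummy 28x28 "image" passed through the randomization layer.
img = np.arange(784, dtype=np.float32).reshape(28, 28)
out = randomize(img, rng=np.random.default_rng(0))
```

At inference time this transform is applied to every input (adversarial or clean) before the classifier, so the attacker cannot anticipate the exact resize/offset used.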