MadryLab / mnist_challenge

A challenge to explore adversarial robustness of neural networks on MNIST.
MIT License

Any adversarial attack that survives a resize attack? #11

Closed BalaMallikarjuna-G closed 4 years ago

BalaMallikarjuna-G commented 4 years ago

Is there any adversarial attack whose added noise survives a resize attack? (adversarial image -> convert to a higher/lower resolution image -> resize back to the original adversarial image size)

dtsip commented 4 years ago

I am not aware of any such attack but it should be possible in principle. Since up/down-scaling images is a differentiable operation, one can incorporate it directly into PGD.
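For illustration, here is a minimal sketch of what "incorporating the resize into PGD" could look like. This is not code from this repository: it is written in PyTorch for brevity, `model`, the 28x28 -> 65x75 -> 28x28 sizes, and the hyperparameters are assumptions taken from the thread, and the exact attack settings would need tuning.

```python
# Sketch only: PGD where each gradient step passes the candidate adversarial
# example through the (differentiable) resize attack before the classifier.
import torch
import torch.nn.functional as F

def resize_attack(x):
    """Differentiable low -> high -> low resize (28x28 -> 65x75 -> 28x28)."""
    up = F.interpolate(x, size=(65, 75), mode="bilinear", align_corners=False)
    return F.interpolate(up, size=(28, 28), mode="bilinear", align_corners=False)

def pgd_through_resize(model, x, y, eps=0.3, alpha=0.01, steps=40):
    """L-inf PGD on model(resize_attack(x)), so the noise survives resizing."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(resize_attack(x_adv)), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball around the clean image and valid range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```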

BalaMallikarjuna-G commented 4 years ago

Thanks for your reply. I tried the PGD and BIM attacks using Foolbox, but after the resize attack (28x28 -> 65x75 -> 28x28) the added noise is removed or disturbed. Please let me know your suggestions.

Sorry for the late reply; I was away from work for the last few days.


dtsip commented 4 years ago

So the idea would be to add a layer between the input and the first network layer that performs this resize attack (low -> high -> low resolution). Then attack this augmented network with a standard method such as PGD. The adversarial example you find will remain adversarial even after resizing.
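A rough sketch of this "augmented network" idea, again in PyTorch and not taken from this repository: wrap the classifier so that the resize attack runs inside the forward pass, then hand the wrapped model to any off-the-shelf attack (PGD, BIM, a Foolbox attack, etc.). `base_model` and the sizes are placeholders.

```python
# Sketch only: prepend the resize attack as a layer so standard attacks
# automatically optimize "through" it.
import torch.nn as nn
import torch.nn.functional as F

class ResizeThenClassify(nn.Module):
    def __init__(self, base_model, hi_size=(65, 75), lo_size=(28, 28)):
        super().__init__()
        self.base_model = base_model
        self.hi_size = hi_size
        self.lo_size = lo_size

    def forward(self, x):
        # Simulate the resize attack; bilinear resizing is differentiable,
        # so gradients flow back to the original 28x28 input.
        x = F.interpolate(x, size=self.hi_size, mode="bilinear", align_corners=False)
        x = F.interpolate(x, size=self.lo_size, mode="bilinear", align_corners=False)
        return self.base_model(x)

# Usage idea: attacked = ResizeThenClassify(base_model), then run your usual
# PGD/BIM (or the PGD loop above) against `attacked` instead of `base_model`.
```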