Closed BalaMallikarjuna-G closed 4 years ago
I am not aware of any such attack but it should be possible in principle. Since up/down-scaling images is a differentiable operation, one can incorporate it directly into PGD.
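To illustrate why this works, here is a minimal NumPy sketch (not Foolbox code; `resize_matrix` is a hypothetical helper built for this example). Linear-interpolation resizing is a linear map, so it is differentiable, and its gradient is just the transpose of the resize matrix. A finite-difference check confirms this:

```python
import numpy as np

def resize_matrix(n_in, n_out):
    """Linear-interpolation resize as an explicit (n_out, n_in) matrix,
    so resize(x) = R @ x and the gradient w.r.t. x is R.T @ upstream."""
    R = np.zeros((n_out, n_in))
    for i in range(n_out):
        # map output index i to a coordinate in the input grid
        pos = i * (n_in - 1) / (n_out - 1) if n_out > 1 else 0.0
        lo = int(np.floor(pos))
        hi = min(lo + 1, n_in - 1)
        frac = pos - lo
        R[i, lo] += 1.0 - frac
        R[i, hi] += frac
    return R

rng = np.random.default_rng(0)

# toy up/down-scaling along one axis: 8 -> 12 -> 8 samples
up, down = resize_matrix(8, 12), resize_matrix(12, 8)
x = rng.standard_normal(8)
grad_out = rng.standard_normal(8)      # upstream gradient from the network

# analytic gradient of y = down @ up @ x with respect to x
grad_analytic = (down @ up).T @ grad_out

# finite-difference check: the resize chain is differentiable (it is linear)
eps = 1e-6
grad_fd = np.array([
    (grad_out @ (down @ up @ (x + eps * e)) - grad_out @ (down @ up @ x)) / eps
    for e in np.eye(8)
])
assert np.allclose(grad_analytic, grad_fd, atol=1e-4)
```

Because the gradient flows cleanly through the resize, PGD can treat the resize chain as just another layer of the network.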
Thanks for your reply. I tried the PGD and BIM attacks using Foolbox, but after the resize attack (28x28 -> 65x75 -> 28x28), the added noise is removed or distorted. Please let me know your suggestions.
Sorry for the late reply; I was away from work for the last few days.
So the idea would be to add a layer between the input and the first network layer that performs this resize operation (lo -> hi -> lo), then attack this augmented network with a standard method such as PGD. The adversarial example you find will remain adversarial even after resizing.
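As a concrete sketch of this idea (a toy NumPy example, not Foolbox code; the linear "classifier" `w`, the loss, and the `resize_matrix` helper are illustrative assumptions, with 28 inputs standing in for one MNIST row), we fold the lo -> hi -> lo resize into the attacked function and run PGD on the composition:

```python
import numpy as np

def resize_matrix(n_in, n_out):
    """Linear-interpolation resize as a matrix (hypothetical helper)."""
    R = np.zeros((n_out, n_in))
    for i in range(n_out):
        pos = i * (n_in - 1) / (n_out - 1) if n_out > 1 else 0.0
        lo = int(np.floor(pos))
        hi = min(lo + 1, n_in - 1)
        frac = pos - lo
        R[i, lo] += 1.0 - frac
        R[i, hi] += frac
    return R

rng = np.random.default_rng(0)

# toy "network": a fixed linear score on 28 inputs
w = rng.standard_normal(28)
def loss(x):            # score the attacker wants to increase
    return w @ x
def grad_loss(x):       # gradient of the linear score
    return w

# augmented network: resize 28 -> 64 -> 28 in front of the classifier
up, down = resize_matrix(28, 64), resize_matrix(64, 28)
T = down @ up           # the whole resize "defense" as one linear map

x0 = rng.random(28)     # clean input in [0, 1]
eps, alpha, steps = 0.3, 0.05, 40

# PGD on loss(T @ x): the gradient w.r.t. x is T.T @ grad_loss(T @ x)
x = x0.copy()
for _ in range(steps):
    g = T.T @ grad_loss(T @ x)
    x = np.clip(x + alpha * np.sign(g), x0 - eps, x0 + eps)
    x = np.clip(x, 0.0, 1.0)

# the perturbation survives resizing: the post-resize loss increased
assert loss(T @ x) > loss(T @ x0)
```

The key line is `g = T.T @ grad_loss(T @ x)`: because the resize is baked into the forward pass, PGD's perturbation is optimized to survive it, rather than being computed on the raw input and then washed out by interpolation.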
Is there an adversarial attack whose added noise survives a resize attack? (adversarial image -> converted to a high/low resolution image -> resized back to the original adversarial image size)