gongzhitaao / tensorflow-adversarial

Crafting adversarial images
MIT License

Any adversarial attack that sustains after resize attack #11

Open BalaMallikarjuna-G opened 4 years ago

BalaMallikarjuna-G commented 4 years ago

Hi,

This is Bala. I have a query regarding adversarial attack.

Is there any adversarial attack whose added noise survives a resize attack? (adversarial image -> convert to a high/low resolution image -> resize back to the original adversarial image size)

Thanks, Bala

gongzhitaao commented 4 years ago

Hi Bala, you mean attack or defense? I don't quite follow your question.

BalaMallikarjuna-G commented 4 years ago

Hi Sir, sorry for the late reply. I was away from work for the last few days.

Thanks for your reply. I need an attack that is robust to resizing. What I expect: target noise added to a 28x28 image -> resize to 65x75 -> resize back to 28x28, and the target label should still be available (survive). When I tested this process, I observed: target noise added to a 28x28 image -> resize to 65x75 -> resize back to 28x28, and the target label is disturbed (no longer predicted). I checked the PGD and BIM attacks using Foolbox; the added noise is removed or disturbed. Please let me know your suggestions.
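For concreteness, the round trip described above can be sketched in numpy as follows (nearest-neighbour resizing stands in for the bilinear interpolation that PIL or OpenCV would normally use; the helper names here are illustrative, not from any library):

```python
import numpy as np

def nn_resize(img, out_h, out_w):
    """Nearest-neighbour resize of a 2-D array (a stand-in for a real
    PIL/cv2 resize, which would typically interpolate bilinearly)."""
    h, w = img.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows[:, None], cols]

# Toy 28x28 image plus a small "adversarial" perturbation.
rng = np.random.default_rng(0)
clean = rng.random((28, 28))
adv = np.clip(clean + 0.05 * rng.standard_normal((28, 28)), 0.0, 1.0)

# The round trip described above: 28x28 -> 65x75 -> 28x28.
round_trip = nn_resize(nn_resize(adv, 65, 75), 28, 28)

# How much perturbation is left relative to the clean image?
residual = np.abs(round_trip - clean).mean()
print(f"mean |perturbation| after round trip: {residual:.4f}")
```

With a real classifier you would feed `round_trip` back into the model and check whether the target label still wins; smoother interpolation (bilinear/bicubic) tends to erase more of the perturbation than nearest-neighbour does.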

gongzhitaao commented 4 years ago

So if I understand it correctly, you want an attack that survives the resizing, right?

The term "resizing attack" is a bit confusing; do you mean a resizing defense?

As far as I know, resizing is not an effective defense against adversarial images. It lowers the attack success rate, but it does not solve the problem: many adversarial examples remain adversarial even after resizing. Some of the early papers on adversarial examples (e.g., the FGSM paper) report related results.
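For the original question of making an attack survive resizing, one established idea is Expectation over Transformation (EOT, Athalye et al., "Synthesizing Robust Adversarial Examples", 2018): average the attack gradient over random instances of the transformation. Below is a minimal numpy sketch of that idea using a toy linear score and nearest-neighbour resizing; the model, the sizes, and every helper name are illustrative, not from this repo or from Foolbox:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 28 * 28  # work on the flattened "image" for simplicity

# Toy linear score: score(x) = w . x; the attack tries to push it down.
w = rng.standard_normal(n) / np.sqrt(n)
x = rng.random(n)

def round_trip_map(n, m):
    """Index map of the n -> m -> n nearest-neighbour resize round trip."""
    up = np.arange(m) * n // m    # upsampled index -> source index
    down = np.arange(n) * m // n  # final index -> upsampled index
    return up[down]               # composition: final index -> source index

def eot_grad(w, n, sizes):
    """Average gradient of score(T(x)) over resize round trips T.
    For a linear score the gradient w.r.t. x is T^T w, i.e. w scattered
    back through each index map; it does not depend on x."""
    g = np.zeros(n)
    for m in sizes:
        np.add.at(g, round_trip_map(n, m), w)
    return g / len(sizes)

eps = 0.1
sizes = [n, 500, 650, 900, 1200]  # identity plus a few resize round trips
x_adv = np.clip(x - eps * np.sign(eot_grad(w, n, sizes)), 0.0, 1.0)

# How much each round trip lowers the score (positive = attack survives).
for m in sizes:
    idx = round_trip_map(n, m)
    print(m, w @ x[idx] - w @ x_adv[idx])
```

In practice you would put a differentiable resize (e.g. `tf.image.resize`) inside the PGD/BIM loop and average the gradients over several random target sizes per step, rather than using this toy linear setup.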

Hope this helps.

m-pektas commented 3 years ago

Hi @gongzhitaao , what do you think about advface [1] or amora [2]? These adversarial attacks change only a few pixels in the image, so I think these methods are more vulnerable to the resize operation. What do you think?

1: https://arxiv.org/abs/1908.05008
2: https://arxiv.org/abs/1912.03829
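A quick numpy sketch of why a few-pixel perturbation is fragile under resizing (2x average pooling stands in here for bilinear downsampling; the setup is purely illustrative):

```python
import numpy as np

clean = np.zeros((28, 28))
adv = clean.copy()
adv[14, 14] = 1.0  # a single-pixel perturbation, as in few-pixel attacks

# 2x average-pooling downsample (a crude stand-in for bilinear resize),
# then nearest-neighbour upsample back to 28x28.
down = adv.reshape(14, 2, 14, 2).mean(axis=(1, 3))
up = np.repeat(np.repeat(down, 2, axis=0), 2, axis=1)

print(np.abs(adv - clean).max())  # 1.0 before the round trip
print(np.abs(up - clean).max())   # 0.25 after: the spike is smeared out
```

The isolated spike loses most of its magnitude after one downsample/upsample round trip, whereas a dense perturbation like FGSM/PGD spreads its energy over the whole image and survives better.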

gongzhitaao commented 3 years ago

Hey @mhmddpkts, I haven't read those papers yet. Sorry, I'm not working on adversarial attacks/defenses anymore (that was a long time ago), so my opinions might be outdated. :smile:

m-pektas commented 3 years ago

When I searched for this problem on Google, I found this page 😅 That is why I asked you. Anyway, thanks for your reply, @gongzhitaao