rakutentech / stAdv

Spatially Transformed Adversarial Examples with TensorFlow
MIT License

My question about L_adv #6

Closed hhhzzj closed 5 years ago

hhhzzj commented 5 years ago

[screenshot of loss.py attached] Looking at the picture above, from loss.py: our goal is to maximize the distance between the target logit and the non-target logits, so I think it should be L_adv_2 - L_adv_1 instead of L_adv_1 - L_adv_2. Am I missing something?

berangerd commented 5 years ago

Let's set aside the tf.maximum part for a minute (assume that L_adv_1 - L_adv_2 > -kappa). We want to minimize the loss, so we want to minimize L_adv_1 - L_adv_2. To do so, we would like L_adv_2 (the target logit) to be as large as possible and L_adv_1 (the largest non-target logit) to be as small as possible, meaning that we want to maximize the logit for the target with respect to the others. This is also consistent with the definition in Carlini & Wagner, arXiv:1608.04644.
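To make the sign convention concrete, here is a minimal NumPy sketch (hypothetical logits, not from the repo) computing L_adv_1 and L_adv_2 as described above:

```python
import numpy as np

# Hypothetical logits for a 4-class classifier; class 2 is the attack target.
logits = np.array([2.0, 1.0, 3.5, 0.5])
target = 2

# L_adv_2: logit of the target class.
L_adv_2 = logits[target]
# L_adv_1: largest logit among the non-target classes.
L_adv_1 = np.max(np.delete(logits, target))

# Minimizing L_adv_1 - L_adv_2 pushes the target logit above all others.
loss = L_adv_1 - L_adv_2
print(loss)  # 2.0 - 3.5 = -1.5 (negative: the target logit already dominates)
```

Note that the loss is negative exactly when the target class wins, which is why minimizing it (rather than maximizing L_adv_2 - L_adv_1) drives the attack toward the target label.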

berangerd commented 5 years ago

We want to minimize L_adv, with L_adv = max(L_adv_1 - L_adv_2, -kappa). The max means that L_adv = L_adv_1 - L_adv_2 if L_adv_1 - L_adv_2 > -kappa, and L_adv = -kappa if L_adv_1 - L_adv_2 < -kappa (in which case we will have a gradient of 0).

Essentially it gives a stopping condition: if L_adv_1 - L_adv_2 < 0, by definition the classifier will predict the target label and there is no need to keep increasing the target logit with respect to the other ones. kappa > 0 is used in arXiv:1608.04644 to "generate high-confidence adversarial examples"; see in particular Section VIII.D.
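The clipping behavior can be sketched as follows (a toy NumPy version of the loss, with made-up logits; the helper name is hypothetical, not from loss.py):

```python
import numpy as np

def l_adv(logits, target, kappa):
    """Clipped margin loss: max(L_adv_1 - L_adv_2, -kappa)."""
    diff = np.max(np.delete(logits, target)) - logits[target]
    return np.maximum(diff, -kappa)

kappa = 1.0

# Margin well below -kappa: loss saturates at -kappa, so the gradient is 0
# and the optimizer stops pushing the target logit further up.
confident = np.array([0.0, 5.0, 1.0])   # target=1, margin = 1.0 - 5.0 = -4.0
print(l_adv(confident, 1, kappa))       # -1.0 (clipped at -kappa)

# Margin above -kappa: loss tracks the margin and still provides a gradient.
borderline = np.array([0.0, 1.5, 1.0])  # margin = 1.0 - 1.5 = -0.5
print(l_adv(borderline, 1, kappa))      # -0.5 (unclipped)
```

Once the margin drops below -kappa the loss is constant, which is exactly the "no need to continue" stopping condition described above; a larger kappa forces a wider margin and hence higher-confidence adversarial examples.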

hhhzzj commented 5 years ago

I finally understand it. Thank you for your patience.