carlini / nn_robust_attacks

Robust evasion attacks against neural networks to find adversarial examples

li attack clarification #15

Closed · gehuangyi20 closed this issue 6 years ago

gehuangyi20 commented 6 years ago

Could you explain the meaning of the following line? https://github.com/carlini/nn_robust_attacks/blob/87f8c6536a2174ff527fb01f124164d90d3d5c3d/li_attack.py#L135

Two things about it do not make sense to me (see the toy sketch after this list):

  1. `step == CONST-1`: `step` is an integer iteration counter, while `CONST` is a floating-point value that scales the loss, so comparing the two looks wrong.
  2. `works` is the loss value of the instance. Why set the threshold on the loss to `0.0001*CONST`? I think the intuition is to push `loss2` to 0 and `loss1` below 0.0001, but I am not sure this explanation is right.
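
To make item 1 concrete, here is a toy, runnable sketch of the questioned comparison (the names `step` and `CONST` follow the thread; the values are invented, not from the repo):

```python
# Toy illustration of item 1: comparing an integer loop counter to a float
# loss constant. With CONST = 20.0 the condition fires at step 19, i.e. at
# an arbitrary point tied to the loss constant, not at the final iteration.
CONST = 20.0            # float constant that scales the loss term
MAX_ITERATIONS = 1000   # integer iteration budget

for step in range(MAX_ITERATIONS):
    if step == CONST - 1:   # the questioned comparison: int vs float
        print("condition fires at step", step)
```
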
carlini commented 6 years ago
  1. The step/const comparison there is a bug, thanks for catching that. It should be `step == self.MAX_ITERATIONS-1` (see the sketch after this list).

  2. Exactly right -- we want it to be small. We check again on the following line that the argmax is what we want, so this check just avoids re-running the model a second time before inspecting the argmax.
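
A minimal, runnable sketch of the corrected logic described in this reply (the `run_step` stand-in and its decaying loss are invented for illustration; only the condition itself comes from the thread):

```python
MAX_ITERATIONS = 1000
ABORT_EARLY = True
CONST = 2.0  # the constant that weights the loss terms

def run_step(step):
    # Stand-in for one optimizer update; returns the current total loss
    # (loss1 + CONST*loss2 in the real attack). Decays geometrically here.
    return 0.5 ** step

for step in range(MAX_ITERATIONS):
    works = run_step(step)
    # Corrected condition: the loss is tiny, and we are either allowed to
    # abort early or this is the final iteration.
    if works < 1e-4 * CONST and (ABORT_EARLY or step == MAX_ITERATIONS - 1):
        # The real code then re-runs the model once and verifies the argmax;
        # gating on the cheap loss value first avoids extra forward passes.
        print(f"candidate success at step {step}, loss={works:.2e}")
        break
```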

carlini commented 6 years ago

Whoops -- I realized that this is just dead code that shouldn't be there at all. I've fixed it in d2067d5.