carlini / nn_robust_attacks

Robust evasion attacks against neural networks to find adversarial examples
BSD 2-Clause "Simplified" License
778 stars 229 forks

Why 10000 in your code, what's the meaning? Thanks!!! #48

Open liuyishoua opened 3 years ago

liuyishoua commented 3 years ago

In the L2 attack implementation, you use the code below. What is its meaning, and why do you use 10000? Could you give some more detail?

other = tf.reduce_max((1-self.tlab)*self.output - (self.tlab*10000), 1)

fotinidelig commented 2 years ago

(For anyone still interested) I think it's just to make sure that this max doesn't take the target label into account. Here you want to compute the maximum logit of the predictions WITHOUT the target (which, if the network is accurate enough, should correspond to the true label of the image). Personally I'm not sure it makes any difference even if logit[tlab] = 1 and logit[label!=tlab] = 0. Also note that tlab = [one-hot target label], real = [logit of the target label] & other = [max logit among all other labels].
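To make the masking concrete, here is a minimal standalone sketch (TF2 eager mode, with toy logits and a hypothetical target class, not taken from the repo) showing that subtracting a large constant at the target position forces reduce_max to ignore it:

```python
import tensorflow as tf

# Hypothetical toy values for illustration: logits of a 4-class model
# for a batch of one image, with target class index 2.
output = tf.constant([[3.0, -1.0, 5.0, 2.0]])  # model logits
tlab = tf.one_hot([2], depth=4)                # one-hot target label

# real: the logit of the target class.
real = tf.reduce_sum(tlab * output, 1)         # -> [5.0]

# other: the max logit over all NON-target classes.
# (1 - tlab) zeroes out the target logit, and subtracting 10000 at the
# target position pushes it far below any plausible logit, so
# reduce_max can never select it.
other = tf.reduce_max((1 - tlab) * output - (tlab * 10000), 1)  # -> [3.0]

print(real.numpy(), other.numpy())
```

Elsewhere in l2_attack.py these two terms feed the targeted objective, roughly max(0, other - real + confidence), which reaches zero exactly when the target logit exceeds every other logit by the confidence margin.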

liuyishoua commented 2 years ago

This helps a lot, thank you.