ashafahi / inceptionv3-transferLearn-poison

Attacking a dog-vs-fish classifier that uses transfer learning with Inception-v3

how does the backward step work? #4


i-chaochen commented 4 years ago

Hi, thanks for sharing this solution @mahyarnajibi @ashafahi.

In your paper you define the backward step as follows:

x_j = (x̂_j + λβ b) / (1 + λβ)

I wonder where this equation comes from. Is there any reference or explanation for it?

In the paper you indicate that this is a proximal update that minimizes the Frobenius distance from the base instance in input space, but as far as I know the Frobenius distance is the following:

‖A − B‖_F = √( Σ_{i,j} (A_{ij} − B_{ij})² )

So how does your backward step minimize the Frobenius distance?

Thanks!
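
For reference, here is one way the backward step can be read as a proximal update. This is a sketch assuming the objective stated in the paper, min_x ‖f(x) − f(t)‖² + β‖x − b‖²_F, where b is the base instance and λ is the forward step size; it is my reading, not a reply from the authors:

```latex
% The forward (gradient) step handles the feature-space term and produces \hat{x}_j.
% The backward step then solves the proximal subproblem for the base-distance term:
\[
x_j \;=\; \operatorname*{arg\,min}_x \;
    \beta \,\lVert x - b \rVert_F^2
    \;+\; \tfrac{1}{2\lambda}\,\lVert x - \hat{x}_j \rVert_F^2 .
\]
% Setting the gradient to zero, 2\beta (x - b) + \tfrac{1}{\lambda}(x - \hat{x}_j) = 0,
% gives the closed form
\[
x_j \;=\; \frac{\hat{x}_j + 2\beta\lambda\, b}{1 + 2\beta\lambda},
\]
% which matches the paper's update up to absorbing the constant factor 2 into \beta.
```

So the backward step does not minimize the Frobenius distance alone; it minimizes it jointly with a quadratic penalty that keeps the iterate close to the forward-step result, which is exactly what a proximal operator does.
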

i-chaochen commented 4 years ago

https://math.stackexchange.com/questions/946911/minimize-the-frobenius-norm-of-the-difference-of-two-matrices-with-respect-to-ma

From the above link I couldn't find any equation similar to yours for minimizing the Frobenius distance.

Also, if I understood correctly, for the poisoning attack on transfer learning, your input space is the feature representation from Inception-v3 without the last fully connected layer?

So you're actually minimising the distance between the poison instance and the base instance in the output of that feature representation, i.e. Inception-v3 without the last fully connected layer.
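
To make the two spaces concrete, below is a minimal sketch of one forward-backward poisoning iteration. This is a generic illustration in PyTorch with a stand-in feature extractor, not the repo's actual TensorFlow/Inception-v3 code; all names (`poison_step`, `f`, `beta`, `lr`) are illustrative:

```python
import torch

def poison_step(x_p, x_base, f, target_feat, lr=0.01, beta=0.25):
    """One forward-backward-splitting iteration of the poison update."""
    x = x_p.clone().detach().requires_grad_(True)
    # Forward step: gradient descent on the FEATURE-space loss ||f(x) - f(t)||^2
    loss = torch.sum((f(x) - target_feat) ** 2)
    loss.backward()
    x_hat = x.detach() - lr * x.grad
    # Backward step: closed-form proximal update that pulls the poison back
    # toward the base instance b in INPUT space
    return (x_hat + lr * beta * x_base) / (1 + lr * beta)

# Toy usage with a stand-in feature extractor (illustrative only)
torch.manual_seed(0)
f = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 8 * 8, 16))
x_base = torch.rand(1, 3, 8, 8)    # base instance b
x_target = torch.rand(1, 3, 8, 8)  # target instance t
with torch.no_grad():
    target_feat = f(x_target)
x_p = x_base.clone()
for _ in range(200):
    x_p = poison_step(x_p, x_base, f, target_feat)
```

If this reading is right, the forward step moves the poison toward the target in feature space (the penultimate-layer output), while the backward step keeps it close to the base in input space, so the two distances live in different spaces.
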