danielzuegner / gnn-meta-attack

Implementation of the paper "Adversarial Attacks on Graph Neural Networks via Meta Learning".
https://www.kdd.in.tum.de/gnn-meta-attack
MIT License

Pick perturbation with lowest score #5

Closed: padulafacundo closed this issue 4 years ago

padulafacundo commented 4 years ago

Hi Daniel,

I have a scenario in which I want to perturb the adjacency matrix, but in such a way that at every perturbation step I choose the edge with the least impact on L_atk that still increases it (and thus brings GCN's accuracy down). In other words, I want to greedily pick the perturbations e = (u, v) one at a time, each time choosing the one with the lowest score that still has a negative impact on the overall accuracy.

In the code, is it enough to use the index of the smallest positive entry of self.adjacency_meta_grad instead of adj_meta_grad_argmax = tf.argmax(self.adjacency_meta_grad)?
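
Concretely, I'm thinking of something along these lines (just a rough sketch; the helper name is made up and the shapes may differ from the actual code):

import tensorflow as tf

def smallest_positive_grad_index(adj_meta_grad):
    """Index of the smallest strictly positive meta-gradient entry.
    Non-positive entries would not increase the attack loss, so they are
    masked out with +inf before taking the argmin."""
    grad_flat = tf.reshape(adj_meta_grad, [-1])
    inf_fill = tf.fill(tf.shape(grad_flat), tf.constant(float('inf'), grad_flat.dtype))
    positive_only = tf.where(grad_flat > 0, grad_flat, inf_fill)
    return tf.argmin(positive_only)

# e.g. adj_meta_grad_argmin = smallest_positive_grad_index(self.adjacency_meta_grad)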

Thanks in advance!

danielzuegner commented 4 years ago

Hi,

Yes, that should do the job. Good luck and let me know if you have further questions!

Daniel

padulafacundo commented 4 years ago

Thanks!

I actually do have a couple of other questions:

- Can the algorithm perturb a self-loop?
- Is it possible for the algorithm to pick the same edge more than once?
- How should I read the content of self.adjacency_meta_update after the graph has been perturbed?

Let me know if you want me to open a separate issue for these or reach you some other way. Thanks again!

danielzuegner commented 4 years ago

Can the algorithm perturb a self-loop?

No, since the self-loops are added in a "hard-coded" way by the GCN preprocessing. However, you can change that in the code if you like.
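
For reference, the standard GCN preprocessing looks roughly like this (a sketch of the usual Kipf & Welling normalization, not necessarily the exact helper used in this repo):

import numpy as np
import scipy.sparse as sp

def preprocess_graph(adj):
    """Symmetric GCN normalization D^-1/2 (A + I) D^-1/2.
    The identity matrix adds the self-loops in a fixed way, which is why
    the attack never considers toggling them."""
    adj_tilde = adj + sp.eye(adj.shape[0])
    deg = np.asarray(adj_tilde.sum(axis=1)).flatten()
    d_inv_sqrt = sp.diags(np.power(deg, -0.5))
    return d_inv_sqrt @ adj_tilde @ d_inv_sqrt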

Is it possible for the algorithm to pick the same edge more than once?

Technically that's possible, though unlikely, I guess. You could set the meta-gradients of the entries that were previously selected to 0 if you want that.
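
A rough sketch of that masking, assuming you keep the already perturbed flat indices in an int64 tensor (the helper and variable names here are made up):

import tensorflow as tf

def mask_used_entries(adj_meta_grad, used_indices):
    """Zero the meta-gradient at flat indices that were already perturbed,
    so the greedy selection cannot pick the same edge twice.
    used_indices: 1-D int64 tensor of unique, previously chosen flat indices."""
    grad_flat = tf.reshape(adj_meta_grad, [-1])
    hits = tf.scatter_nd(tf.reshape(used_indices, [-1, 1]),
                         tf.ones_like(used_indices, dtype=grad_flat.dtype),
                         tf.shape(grad_flat, out_type=used_indices.dtype))
    # (1 - hits) is 0 at already-used entries and 1 everywhere else.
    return grad_flat * (1.0 - hits)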

How should I read the content of self.adjacency_meta_update after the graph has been perturbed?

I'm honestly not sure whether there is anything to read from self.adjacency_meta_update. If you want the perturbed edges, you can look at the nonzero entries in self.adjacency_changes.

Does that help?

padulafacundo commented 4 years ago

Technically that's possible, though unlikely, I guess. You could set the meta-gradients of the entries that were previously selected to 0 if you want that.

I don't really want to perturb the same edge twice; I'm just checking whether it's possible because I suspect it might be happening to me 😅

I'm honestly not sure whether there is anything to read from self.adjacency_meta_update. If you want the perturbed edges, you can look at the nonzero entries in self.adjacency_changes.

Isn't self.adjacency_meta_update supposed to have the updated content of self.adjacency_changes?

# Add the change to the perturbations.
self.adjacency_meta_update = tf.scatter_add(self.adjacency_changes,
                                            indices=adj_argmax_combined,
                                            updates=-2 * tf.gather(
                                                 tf.reshape(self.modified_adjacency, [-1]),
                                                 adj_argmax_combined) + 1)

How do I interpret self.adjacency_changes? A 0 if the edge hasn't been perturbed and a 1 if it has (regardless of whether the edge has been 'added' or 'removed')? What if an edge has been perturbed twice? Will the entry in self.adjacency_changes be a 1 or a 0 (because it went from 0 to 1 and then from 1 to 0)?

Does that help?

Yes! Thanks!

danielzuegner commented 4 years ago

Hi,

As far as I remember, tf.scatter_add is the operation that directly modifies the target tensor, i.e. in this case self.adjacency_changes. self.adjacency_changes is a flat [N*N] tensor that contains 1 for edge insertions, -1 for edge deletions, and 0 otherwise. This is then added to the original adjacency matrix to obtain the perturbed adjacency matrix. If an edge is modified twice, it is set back to its original state. If you don't want that, you can manually set to zero the gradients corresponding to the already modified indices.
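
To make that concrete, here is a small illustrative NumPy snippet for reading the perturbations back out of such a flat change vector (the function is just an example, not part of the repo):

import numpy as np

def list_perturbations(adjacency_changes, n):
    """Decode a flat [N*N] change vector: +1 = edge inserted, -1 = edge deleted.
    The update -2*A + 1 in the snippet above adds +1 when the edge is currently
    absent and -1 when it is currently present."""
    changes = adjacency_changes.reshape(n, n)
    inserted = np.argwhere(changes == 1)    # edges added to the clean graph
    removed = np.argwhere(changes == -1)    # edges removed from the clean graph
    return inserted, removed

# The perturbed adjacency matrix is then simply: clean_adjacency + changes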

I hope that helps.

Best,

Daniel

padulafacundo commented 4 years ago

That really helps! Thank you very much!