[Closed] dtjayaaa12345 closed this issue 5 years ago
Hi,
our attacks are poisoning attacks: we modify the graph structure and then train classifiers on this modified graph. To compare the accuracy achieved when training on the 'clean' (i.e., unperturbed) graph with the accuracy on the 'poisoned' graph, we first train gcn_before_attack on the original (clean) graph and then train gcn_after_attack on the modified graph.
So in our paper we have both gcn_before_attack (under the label "Clean") and gcn_after_attack (e.g. "Meta-Self").
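To make the clean-vs-poisoned comparison concrete, here is a minimal NumPy sketch (not code from the repository; the graph, edges, and helper name are hypothetical) showing why the same GCN propagation rule produces different node representations once the attack perturbs the adjacency matrix:

```python
import numpy as np

def normalize_adj(adj):
    """Symmetric GCN normalization: D^-1/2 (A + I) D^-1/2."""
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    return a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

# Tiny example graph: 4 nodes on a path (edges 0-1, 1-2, 2-3).
clean = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    clean[i, j] = clean[j, i] = 1.0

# Poisoned copy: the attack inserts one adversarial edge (0-3).
poisoned = clean.copy()
poisoned[0, 3] = poisoned[3, 0] = 1.0

features = np.eye(4)  # one-hot node features
h_clean = normalize_adj(clean) @ features
h_poisoned = normalize_adj(poisoned) @ features

# The identical propagation rule now yields different representations;
# training on the poisoned graph is what degrades the final accuracy.
print(np.abs(h_clean - h_poisoned).max() > 0)  # True
```

This is only the (untrained) propagation step; in the actual experiments a full GCN is trained on each graph and the resulting test accuracies are compared.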
I hope this clarifies your issue, otherwise please let me know.
Best,
Daniel
Thanks for your quick response! :) However, I wonder whether we could instead train the same model (e.g., the gcn_before_attack model) on the 'clean' and the 'poisoned' graphs, respectively. Different accuracies of the same model on the two graphs would be enough to show the effectiveness of the poisoning attack. In addition, at https://github.com/danielzuegner/gnn-meta-attack/blob/f473d9ea1dd53614fe05e7b87b095a047d753ccf/metattack/meta_gradient_attack.py#L345, is there a bug? Lines 344 and 345 have the same content. Thanks again!
Hi,
yes, in principle you can train a model on the 'clean' graph, replace the adjacency matrix with the poisoned one, and train the model again (after re-initializing the weights). The reason I use a different model is that it is rather cumbersome in TensorFlow to assign a new value to a SparseTensor. We could achieve this with a sparse placeholder, but that would be less efficient because the data would have to be fed in at every training iteration.
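The 'swap the adjacency and retrain' procedure described above can be sketched framework-independently. The following NumPy example (hypothetical toy data and helper names, not the repository's TensorFlow implementation) trains the same one-layer GCN-style classifier twice, re-initializing the weights each time, once on the clean and once on the poisoned adjacency:

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize_adj(adj):
    """Symmetric GCN normalization: D^-1/2 (A + I) D^-1/2."""
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    return a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def train_and_score(adj, features, labels, steps=300, lr=0.5):
    """Train a one-layer GCN-style softmax classifier from scratch.

    The weights are freshly initialized on every call, mirroring the
    procedure above: the model definition stays the same, and only the
    adjacency matrix changes between the clean and poisoned runs.
    """
    x = normalize_adj(adj) @ features  # one propagation step
    w = rng.normal(scale=0.1, size=(features.shape[1], labels.shape[1]))
    for _ in range(steps):
        logits = x @ w
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
        w -= lr * x.T @ (probs - labels) / len(labels)  # cross-entropy gradient
    preds = (x @ w).argmax(axis=1)
    return (preds == labels.argmax(axis=1)).mean()

# Toy graph: two fully connected triangles, one per class.
adj_clean = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    adj_clean[i, j] = adj_clean[j, i] = 1.0

# Poisoned copy: adversarial edges connecting the two clusters.
adj_poisoned = adj_clean.copy()
for i, j in [(0, 3), (1, 4), (2, 5)]:
    adj_poisoned[i, j] = adj_poisoned[j, i] = 1.0

features = np.eye(6)
labels = np.zeros((6, 2))
labels[:3, 0] = labels[3:, 1] = 1.0

acc_clean = train_and_score(adj_clean, features, labels)
acc_poisoned = train_and_score(adj_poisoned, features, labels)
print(acc_clean, acc_poisoned)
```

On a toy graph this small both runs may still fit the labels; on real data with a train/test split, training on the poisoned graph is what lowers the test accuracy. In TensorFlow 1.x this pattern is awkward precisely because the sparse adjacency is baked into the graph as a SparseTensor, hence the two separate models in the repository.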
Regarding lines 344 and 345: You're correct, that was a bug. Thanks for pointing this out :)
Let me know if there's anything else I can help you with.
Hi Daniel, once the graph has been perturbed, why do we need a new model, gcn_after_attack, to evaluate the performance of the attack, instead of using gcn_before_attack directly? If both models are used, which one is the targeted model mentioned in your paper? Thanks for making your code available!