ChandlerBang / Pro-GNN

Implementation of the KDD 2020 paper "Graph Structure Learning for Robust Graph Neural Networks"
https://arxiv.org/abs/2005.10203

Questions about Netattack #18

Open jingweio opened 1 year ago

jingweio commented 1 year ago

Hi~ Thanks for sharing this great work! I have one question about the experimental details of the nettack attack.

It looks like nettack [1] in deeprobust can only perturb the graph structure for one given target node at a time. But I noticed that for each dataset and each perturbation rate there is only one adv. adjacency matrix in the folder "nettack" covering many target nodes, e.g., "cora_nettack_adj_2.0.npz" for the attacked_test_nodes in "cora_nettacked_nodes.json".

I am wondering how you did this to save disk space (since I also want to perturb other datasets not included in your experiments), or is there some misunderstanding on my part about the targeted attack? I would really appreciate it if you could spare your valuable time to answer.

[1] https://deeprobust.readthedocs.io/en/latest/source/deeprobust.graph.targeted_attack.html

ChandlerBang commented 1 year ago

Hey, you may check the answer in issue #12. Feel free to reach out if you have other questions.

jingweio commented 1 year ago

Thanks a lot~

jingweio commented 1 year ago

Besides, I am wondering whether this sequential attack is the official implementation from the original paper [1] (I haven't checked their source code, just asking by the way), or just a way to save computational cost?

[1] Adversarial Attacks on Neural Networks for Graph Data

ChandlerBang commented 1 year ago

The sequential attack method follows the way described in the paper [1]. If you wanna check the original setting in [2], please check this script.

[1] Robust Graph Convolutional Networks Against Adversarial Attacks. KDD
[2] Adversarial Attacks on Neural Networks for Graph Data. KDD
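For readers landing here: the sequential setup discussed above (one cumulative perturbed adjacency matrix covering all target nodes) can be sketched roughly as follows. This is not the repo's actual code; `attack_one_node` is a hypothetical stand-in for a single-target attack such as Nettack.

```python
# Sketch of a sequential targeted attack: each target node is attacked
# against the adjacency matrix left behind by the previous attacks, so
# a single cumulative perturbed graph covers all target nodes.
# `attack_one_node` is a made-up stand-in for one Nettack run; here it
# just flips one illustrative edge per target.

def attack_one_node(adj, node):
    # Toy perturbation: flip the (symmetric) edge between `node` and node 0.
    adj = [row[:] for row in adj]  # copy so each step is explicit
    adj[node][0] ^= 1
    adj[0][node] ^= 1
    return adj

def sequential_attack(adj, target_nodes):
    for node in target_nodes:
        adj = attack_one_node(adj, node)  # reuse the already-perturbed graph
    return adj

clean = [[0, 1, 1],
         [1, 0, 0],
         [1, 0, 0]]
perturbed = sequential_attack(clean, target_nodes=[1, 2])
```

The point of the design is that only the final `perturbed` matrix needs to be saved to disk, rather than one adjacency matrix per target node.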

jingweio commented 1 year ago

sure~ thanks~

jingweio commented 1 year ago

hi~ Can I ask you one more question ( ͡• ͜ʖ ͡•)? I am currently trying to attack my self-written models with the API provided by DeepRobust, e.g., Metattack. But it looks like it can only be deployed on the imported models, such as `from deeprobust.graph.defense import GCN`. Have you ever succeeded in doing this with your own model, or do we really need to rebuild our self-written models based on their template (`from deeprobust.graph.defense import GCN`)?

ChandlerBang commented 1 year ago

Hey, it should be fine to use other GNNs as the surrogate model, but you need to assign some new attributes such as `surrogate.hidden_sizes`, `surrogate.nfeat`, and `surrogate.nclass`. However, note that by default Metattack employs a linearized GCN in the attacking process, and we only use the surrogate model to generate the self-labels as in the `self_training_label()` function.
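The attribute patching described above can be sketched as follows. `MyGNN` and the sizes are made up for illustration (in practice it would be a `torch.nn.Module`), and Metattack itself is not imported here:

```python
# Sketch: patching a self-written surrogate with the attributes that
# deeprobust's Metattack reads from it (hidden_sizes, nfeat, nclass).
# `MyGNN` is a hypothetical stand-in for a custom model class.

class MyGNN:
    def __init__(self, nfeat, nhid, nclass):
        self.layer_dims = [nfeat, nhid, nclass]
        # ... rest of the model definition ...

surrogate = MyGNN(nfeat=1433, nhid=16, nclass=7)  # Cora-like sizes

# Assign the attributes Metattack expects on its surrogate:
surrogate.hidden_sizes = [16]
surrogate.nfeat = 1433
surrogate.nclass = 7

# Afterwards, `surrogate` could be passed to Metattack in place of
# deeprobust's own GCN, per the maintainer's note above.
```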

jingweio commented 1 year ago


Thanks~

jingweio commented 1 year ago

Hi, recently I noticed that the Nettack function provided by DeepRobust, i.e., `from deeprobust.graph.targeted_attack import Nettack`, gets the weights of the input model as follows:

```python
def get_linearized_weight(self):
    surrogate = self.surrogate
    W = surrogate.gc1.weight @ surrogate.gc2.weight
    return W.detach().cpu().numpy()
```

Here, `gc1` and `gc2` are graph convolutions that map the input into the hidden space and the label space, respectively. My question is: what if there are other weights in my model, such as GAT with weights for parameterizing edges? Or do we only need to consider the weights for the space mapping in Nettack?
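For context on why `get_linearized_weight` multiplies the two weights: with the nonlinearity dropped, a two-layer linear map `(x @ W1) @ W2` equals the single map `x @ (W1 @ W2)`, so the attack only needs the product. A toy, stdlib-only illustration (sizes made up; a real GCN would also multiply by the normalized adjacency):

```python
# Toy check of the linearization behind get_linearized_weight:
# applying W1 then W2 gives the same result as applying W1 @ W2 once.
# Pure-Python matrices; dimensions are invented for illustration.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

W1 = [[1, 2], [3, 4], [5, 6]]   # "gc1": nfeat=3 -> nhid=2
W2 = [[1, 0, 1], [0, 1, 1]]     # "gc2": nhid=2 -> nclass=3
x = [[1, 0, 2]]                  # a single node's feature row

two_step = matmul(matmul(x, W1), W2)   # layer-by-layer
one_step = matmul(x, matmul(W1, W2))   # collapsed weight product
assert two_step == one_step            # -> [[11, 14, 25]] either way
```

This equivalence only holds for purely linear, feature-independent weights, which is the crux of the question: attention weights in GAT parameterize the edges and do not collapse into a single product this way.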