mims-harvard / GNNGuard

Defending graph neural networks against adversarial attacks (NeurIPS 2020)
https://zitniklab.hms.harvard.edu/projects/GNNGuard

A question about the pruning procedure #1

Closed wzfhaha closed 3 years ago

wzfhaha commented 3 years ago

Good paper! But I have a question. As described in the paper, GNNGuard prunes graph edges according to Equation (5), but I cannot find any code that does this. Could you point out the location of the pruning code?

xiangzhang1015 commented 3 years ago

Hi @wzfhaha

Thanks for your interest.

Take the direct Nettack attacker (Nettack-Di) as an example: if you use the GCN model here (https://github.com/mims-harvard/GNNGuard/blob/master/GNNGuard/Nettack-Di.py#L85), the model will call the GCN architecture (gcn.py in the defense folder).

As you can see, we use the att_coef function to update the adjacency matrix (https://github.com/mims-harvard/GNNGuard/blob/master/defense/gcn.py#L114).

Diving into this function (https://github.com/mims-harvard/GNNGuard/blob/master/defense/gcn.py#L184), you can see that the threshold is set to 0.1 (this is P_0 in Eq. 5 of the paper): if the similarity is below 0.1, we prune that edge. You can tune P_0 based on your dataset.
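To illustrate the idea, here is a minimal, hypothetical sketch of similarity-threshold pruning on a dense adjacency matrix. It is not the repository's att_coef implementation; the function name `prune_edges`, the use of cosine similarity, and the dense NumPy representation are simplifying assumptions for illustration only.

```python
import numpy as np

def prune_edges(features: np.ndarray, adj: np.ndarray, p0: float = 0.1) -> np.ndarray:
    """Zero out edges whose endpoint feature similarity falls below p0.

    Hypothetical sketch of the thresholding step discussed above;
    p0 plays the role of P_0 in Eq. 5 (default 0.1).
    """
    # Row-normalize features so the dot product gives cosine similarity.
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    norms[norms == 0] = 1.0  # guard against all-zero feature rows
    normed = features / norms
    sim = normed @ normed.T  # pairwise cosine similarity matrix

    pruned = adj.copy()
    # Prune existing edges whose similarity is below the threshold.
    pruned[(adj > 0) & (sim < p0)] = 0
    return pruned
```

In practice you would apply this per layer on the (sparse) adjacency matrix and re-normalize the edge weights afterwards, and tune `p0` per dataset as noted above.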