AIML-K / GNN_Survey

updating papers related to GNN

Adversarial label-flipping attack and defense for graph neural networks #14

Open 2nazero opened 3 days ago

2nazero commented 3 days ago

Adversarial label-flipping attack and defense for graph neural networks Adversarial_Label-Flipping_Attack_and_Defense_for_Graph_Neural_Networks.pdf

@inproceedings{zhang2020adversarial,
  title={Adversarial label-flipping attack and defense for graph neural networks},
  author={Zhang, Mengmei and Hu, Linmei and Shi, Chuan and Wang, Xiao},
  booktitle={2020 IEEE International Conference on Data Mining (ICDM)},
  pages={791--800},
  year={2020},
  organization={IEEE}
}
2nazero commented 1 day ago

Overall Summary

The primary goal of this paper is to analyze the impact of label-flipping attacks on Graph Neural Networks (GNNs) and to propose an effective defense strategy against such attacks.

Contributions

2nazero commented 1 day ago

Attack Model: LafAK

LafAK addresses two main challenges of label-flipping attacks:

  1. Bi-level optimization
    • Challenge: Label-flipping attacks are inherently bi-level optimization problems, where an outer optimization (to maximize attack impact) depends on an inner optimization (retraining the GNN with flipped labels).
    • Solution: LafAK uses a closed-form approximation of the GNN obtained through linearization. By collapsing the inner optimization into this closed form, the bi-level problem becomes a single-level optimization, so LafAK can compute the optimal label flips efficiently without retraining the model from scratch.
  2. Non-differentiable components
    • Challenge: Label-flipping introduces non-differentiable elements, such as the 0-1 loss and binary flipping, making gradient-based optimization difficult.
    • Solution: LafAK replaces the non-differentiable components with continuous surrogates. It approximates the 0-1 loss with a smooth function (e.g., tanh) and relaxes the binary flipping vector into continuous flip probabilities, enabling gradient-based optimization and efficient attack generation.
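The two ideas above can be combined into a single gradient-based attack loop. The sketch below is a toy illustration, not the paper's exact algorithm: it stands in for the linearized GNN with an SGC-style propagation `A_hat @ A_hat @ X` plus a ridge-regression head (so the inner training step has a closed form), uses the tanh surrogate of the 0-1 loss, relaxes the flip vector into continuous probabilities `p`, and uses a numerical gradient for brevity where LafAK differentiates analytically. All names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 20 nodes, 5 features, binary labels in {-1, +1}.
n, d = 20, 5
A = (rng.random((n, n)) < 0.2).astype(float)
A = np.maximum(A, A.T)                       # symmetrize
A_hat = A + np.eye(n)                        # add self-loops
A_hat /= A_hat.sum(1, keepdims=True)         # row-normalize
X = rng.standard_normal((n, d))
y = np.sign(rng.standard_normal(n))

Z = A_hat @ A_hat @ X                        # linearized 2-layer GCN features

def train_closed_form(y_train, lam=1e-2):
    """Closed-form ridge solution: replaces the inner retraining loop."""
    return np.linalg.solve(Z.T @ Z + lam * np.eye(d), Z.T @ y_train)

def attack_loss(p):
    """Expected smooth-surrogate 0-1 loss on the clean labels after flipping.

    p[i] is the continuous probability of flipping node i's label; the
    tanh term is the smooth surrogate of the non-differentiable 0-1 loss."""
    y_flipped = y * (1.0 - 2.0 * p)          # expected label under flip probs
    w = train_closed_form(y_flipped)         # "retrain" in closed form
    margin = y * (Z @ w)
    return np.mean((1.0 - np.tanh(margin)) / 2.0)   # higher = more errors

# Projected gradient ascent on p (numerical gradient for brevity).
p = np.zeros(n)
budget = 3                                   # flip at most 3 labels
eps, lr = 1e-5, 5.0
for _ in range(100):
    g = np.zeros(n)
    for i in range(n):
        e = np.zeros(n); e[i] = eps
        g[i] = (attack_loss(p + e) - attack_loss(p - e)) / (2 * eps)
    p = np.clip(p + lr * g, 0.0, 1.0)

flip_idx = np.argsort(-p)[:budget]           # discretize: keep top-budget flips
p_final = np.zeros(n); p_final[flip_idx] = 1.0
print("clean loss:   ", attack_loss(np.zeros(n)))
print("attacked loss:", attack_loss(p_final))
```

The key point the sketch demonstrates is that, once the inner training problem has a closed form and the losses are smooth, the attacker can push gradients all the way to the flip probabilities and only discretize at the end.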
2nazero commented 1 day ago

Defense Framework

The main idea of the defense framework is to train on community labels in addition to the individual node labels, so that the model does not depend solely on individual labels (which may have been flipped) but also on the more robust community-level structure.

Multi-Task Loss Function ($L_{MT}$)

$$ L_{MT}(A, X, Y, Y_c) = L(\theta^{(L-1)}, W^{(L)}; A, X, Y) + \lambda_c L_c(\theta^{(L-1)}, W_c; A, X, Y_c) $$

When $\lambda_c$ is non-zero, the community-level signal is emphasized, discouraging the GNN from over-fitting to potentially incorrect (flipped) individual labels. This balanced loss helps the model retain generalizable patterns and counteract LafAK attacks.
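A minimal sketch of this multi-task loss is below. It assumes (as a simplification) that the shared backbone output $\theta^{(L-1)}$ is already computed as an embedding matrix `H`, and that both heads ($W^{(L)}$ for node labels, $W_c$ for community labels) are linear with a cross-entropy loss; the names `multi_task_loss`, `H`, `y_c`, and `lam_c` are illustrative, not from the paper's code.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(logits, labels):
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def multi_task_loss(H, y, y_c, W, W_c, lam_c=0.5):
    """L_MT = L(node labels) + lam_c * L_c(community labels).

    H   : shared backbone embeddings theta^{(L-1)} applied to (A, X), [n, h]
    W   : node-classification head W^{(L)}; W_c: community head (both linear)
    y   : individual node labels (possibly poisoned by flipping)
    y_c : community labels, e.g. from an unsupervised partitioning method"""
    return cross_entropy(H @ W, y) + lam_c * cross_entropy(H @ W_c, y_c)

# Toy usage: 10 nodes, 4-dim embeddings, 2 node classes, 3 communities.
rng = np.random.default_rng(0)
H = rng.standard_normal((10, 4))
y = rng.integers(0, 2, 10)
y_c = rng.integers(0, 3, 10)
W, W_c = rng.standard_normal((4, 2)), rng.standard_normal((4, 3))
print(multi_task_loss(H, y, y_c, W, W_c, lam_c=0.5))
```

Setting `lam_c=0` recovers the plain single-task loss, which makes the role of $\lambda_c$ as a knob between label fidelity and community-level robustness explicit.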
