The primary goal of this paper is to analyze the impact of label-flipping attacks on Graph Neural Networks (GNNs) and to propose an effective defense strategy against such attacks.
The paper also discusses the two main challenges of mounting a label-flipping attack and how LafAK overcomes them.
The main idea of the defense framework is to additionally use community labels, so that the model does not rely solely on individual node labels (which may be flipped) but also on community-level labels.
$$ L_{MT}(A, X, Y, Y_c) = L(\theta^{(L-1)}, W^{(L)}; A, X, Y) + \lambda_c \, L_c(\theta^{(L-1)}, W_c; A, X, Y_c) $$
When $\lambda_c$ is non-zero, the community-level signal is emphasized, guiding the GNN away from over-fitting to potentially incorrect (flipped) labels. This balanced loss helps the model retain generalizable patterns and counteract LafAK attacks.
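A minimal sketch of how this multi-task loss could be wired up in PyTorch is shown below. Note that the `MultiTaskGCN` class, the `multitask_loss` helper, the normalized adjacency `A_hat`, and the default `lambda_c=0.8` are illustrative assumptions, not the authors' implementation.

```python
import torch.nn as nn
import torch.nn.functional as F


class MultiTaskGCN(nn.Module):
    """Two-layer GCN with a shared backbone and two heads:
    one supervised by the (possibly flipped) node labels Y,
    one supervised by the community labels Y_c."""

    def __init__(self, in_dim, hid_dim, n_classes, n_communities):
        super().__init__()
        self.backbone = nn.Linear(in_dim, hid_dim)                # shared parameters theta^{(L-1)}
        self.label_head = nn.Linear(hid_dim, n_classes)           # W^{(L)}, trained on Y
        self.community_head = nn.Linear(hid_dim, n_communities)   # W_c, trained on Y_c

    def forward(self, A_hat, X):
        # A_hat: normalized adjacency (with self-loops), X: node feature matrix
        h = F.relu(A_hat @ self.backbone(X))                      # shared hidden representation
        return self.label_head(A_hat @ h), self.community_head(A_hat @ h)


def multitask_loss(logits_y, logits_yc, Y, Y_c, train_mask, lambda_c=0.8):
    """L_MT = L(.; A, X, Y) + lambda_c * L_c(.; A, X, Y_c), on the training nodes."""
    loss_y = F.cross_entropy(logits_y[train_mask], Y[train_mask])
    loss_yc = F.cross_entropy(logits_yc[train_mask], Y_c[train_mask])
    return loss_y + lambda_c * loss_yc
```

Setting `lambda_c` to zero recovers the standard single-task cross-entropy loss, so the trade-off between fitting the given (possibly flipped) labels and the community-level signal is controlled by that single hyperparameter.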
Paper: Adversarial Label-Flipping Attack and Defense for Graph Neural Networks (Adversarial_Label-Flipping_Attack_and_Defense_for_Graph_Neural_Networks.pdf)