Hi wuyiteng,
(1) Since the final attack goal is to modify the original unnormalized adjacency matrix (whose entries are binary), it is more precise to compute the gradient directly with respect to `adj_changes` instead of `adj_norm`.
(2) In the implementation, `adj_grad` is not symmetric. We first take the argmax of `adj_grad`, which gives a row index and a column index, and then modify both `A[row_id][col_id]` and `A[col_id][row_id]` to make sure the attacked graph stays symmetric (see the sketch below).
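For reference, here is a minimal runnable sketch of both points. The toy graph, features `X`, weight `w`, surrogate loss, and normalization are placeholders of mine, not the actual Mettack code; only the gradient-to-`adj_changes` / argmax / symmetric-flip pattern follows the description above.

```python
import torch

torch.manual_seed(0)
n = 5
A = torch.eye(n)                                     # toy binary adjacency matrix
X = torch.randn(n, 3)                                # toy node features (placeholder)
w = torch.randn(3)                                   # toy weight vector (placeholder)
adj_changes = torch.zeros(n, n, requires_grad=True)  # perturbation variable (leaf)

# The normalized adjacency is built *from* adj_changes, so autograd can
# trace attack_loss all the way back to adj_changes.
modified_adj = A + adj_changes
adj_norm = modified_adj / modified_adj.sum(1, keepdim=True)  # toy normalization
attack_loss = (adj_norm @ X @ w).sum()               # placeholder surrogate loss

# (1) Gradient w.r.t. the perturbation variable, not adj_norm:
adj_grad = torch.autograd.grad(attack_loss, adj_changes, retain_graph=True)[0]

# (2) adj_grad is not symmetric in general; take its argmax and flip both
#     (row_id, col_id) and (col_id, row_id) so the attacked graph stays symmetric.
row_id, col_id = divmod(torch.argmax(adj_grad).item(), n)
A[row_id, col_id] = 1 - A[row_id, col_id]
A[col_id, row_id] = 1 - A[col_id, row_id]
```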
Hope this post can help you. Thanks.
Thanks for your prompt reply. We have thought through your answers carefully and have no further questions for now; some details were not handled properly in our own work. Thanks again for your help.
Thanks for your contribution; this project is a great help for our work.
I have two questions about the Mettack method and the other global attack methods.
(1) When computing the gradient used to update the adjacency matrix A to obtain `modified_adj`, there is a critical method, `get_meta_grad`. I can't understand why the gradient of the loss function `attack_loss` is computed with respect to `adj_changes`, i.e., `adj_grad = torch.autograd.grad(attack_loss, self.adj_changes, retain_graph=True)[0]`. What is the relationship between the loss function `attack_loss` and `adj_changes`? In my view, the gradient of `attack_loss` should be computed with respect to `adj_norm` (the adjacency matrix, not `adj_changes`), i.e., `adj_grad2 = torch.autograd.grad(attack_loss, adj_norm, retain_graph=True)[0]`. But this has no effect.
(2) `adj_grad` is a symmetric matrix when the gradient is computed with respect to `adj_changes`. Why? I can't understand this operation. Thanks!
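To make the comparison concrete, here is a toy version of the two gradient calls (the setup below, including `X`, `w`, and the normalization, is a made-up stand-in, not the repository's code):

```python
import torch

n, d = 4, 2
A = torch.eye(n)                                     # toy adjacency matrix
X = torch.randn(n, d)                                # toy node features (placeholder)
w = torch.randn(d)                                   # toy weight vector (placeholder)
adj_changes = torch.zeros(n, n, requires_grad=True)

modified_adj = A + adj_changes                       # differentiable in adj_changes
adj_norm = modified_adj / modified_adj.sum(1, keepdim=True)  # toy normalization
attack_loss = (adj_norm @ X @ w).sum()               # stand-in surrogate loss

# Gradient w.r.t. the leaf variable the attack actually updates:
adj_grad = torch.autograd.grad(attack_loss, adj_changes, retain_graph=True)[0]
# Gradient w.r.t. the intermediate tensor adj_norm also exists, but it describes
# changes to the *normalized* matrix rather than edge flips on the binary one:
adj_grad2 = torch.autograd.grad(attack_loss, adj_norm)[0]
```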