DSE-MSU / DeepRobust

A pytorch adversarial library for attack and defense methods on images and graphs

Why compute gradient for adj_changes #28

Closed: wuyiteng closed this issue 4 years ago

wuyiteng commented 4 years ago

Thanks for your contribution; this project is a great help for our work.

I have two questions about the mettack attack method and the other global attack methods.

(1) When computing the gradient used to update the adjacency matrix A and obtain modified_adj, the key method is get_meta_grad. I don't understand why the gradient of the loss function attack_loss is computed with respect to adj_changes, i.e., adj_grad = torch.autograd.grad(attack_loss, self.adj_changes, retain_graph=True)[0]. What is the relationship between the loss function attack_loss and adj_changes? In my view, the gradient of attack_loss should be taken with respect to adj_norm (the normalized adjacency matrix), not adj_changes, i.e., adj_grad2 = torch.autograd.grad(attack_loss, adj_norm, retain_graph=True)[0]. However, this approach did not work for me.

(2) When the gradient is computed with respect to adj_changes, adj_grad is a symmetric matrix. Why? I can't follow this part of the process.
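For reference, here is a minimal self-contained sketch of the two calls I am comparing. The toy sizes, the one-layer surrogate, and the plain way modified_adj is built from adj_changes are my own simplifications for illustration, not the library's actual code:

```python
# Toy sketch: why the gradient can be taken w.r.t. adj_changes.
# adj_changes is a leaf Parameter, and modified_adj / adj_norm / attack_loss
# are all built from it, so autograd can trace the whole chain back to it.
import torch
import torch.nn.functional as F

n, d, c = 5, 8, 3                                  # nodes, feature dim, classes
ori_adj = (torch.rand(n, n) > 0.7).float()
ori_adj = torch.triu(ori_adj, 1)
ori_adj = ori_adj + ori_adj.t()                    # symmetric binary adjacency
features = torch.rand(n, d)
labels = torch.randint(0, c, (n,))
W = torch.rand(d, c)                               # fixed surrogate weights (toy)

adj_changes = torch.nn.Parameter(torch.zeros(n, n))

# The real implementation builds modified_adj from adj_changes in a more
# involved (symmetrized, clipped) way; a plain addition is enough here to show
# that the chain from adj_changes to attack_loss is differentiable.
modified_adj = ori_adj + adj_changes

deg_inv_sqrt = (modified_adj.sum(1) + 1).pow(-0.5)
adj_norm = deg_inv_sqrt.unsqueeze(1) * (modified_adj + torch.eye(n)) * deg_inv_sqrt.unsqueeze(0)

output = F.log_softmax(adj_norm @ features @ W, dim=1)   # one-layer surrogate
attack_loss = F.nll_loss(output, labels)

# Gradient w.r.t. the leaf parameter the attack actually perturbs:
adj_grad = torch.autograd.grad(attack_loss, adj_changes, retain_graph=True)[0]

# In this toy setup a gradient w.r.t. the intermediate adj_norm can also be
# computed, but it scores the normalized entries rather than the binary edges
# the attack needs to flip:
adj_norm_grad = torch.autograd.grad(attack_loss, adj_norm, retain_graph=True)[0]
print(adj_grad.shape, adj_norm_grad.shape)
```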

Thanks!

ChandlerBang commented 4 years ago

Hi wuyiteng,

(1) Since the final goal of the attack is to modify the original unnormalized adjacency matrix (whose entries are binary), it is more precise to compute the gradient directly with respect to adj_changes rather than adj_norm.

(2) In the implementation, adj_grad is not symmetric. We first take the argmax of adj_grad to get its row index and column index, and then modify both A[row_id][col_id] and A[col_id][row_id] so that the attacked graph stays symmetric.
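For illustration, here is a rough sketch of that select-and-flip step, assuming a dense binary adjacency A and the gradient adj_grad computed above. The (1 - 2A) scoring and the helper name flip_one_edge are illustrative, not the exact library code:

```python
import torch

def flip_one_edge(A, adj_grad):
    """Flip the single entry with the highest gradient score, keeping A symmetric."""
    n = A.shape[0]
    # Score candidate changes: (1 - 2A) flips the sign for existing edges so
    # that removing an edge can also be selected when it increases the loss.
    score = adj_grad * (1 - 2 * A)
    score.fill_diagonal_(-float('inf'))      # never touch self-loops
    idx = torch.argmax(score).item()         # flattened argmax over all entries
    row, col = idx // n, idx % n
    A = A.clone()
    A[row, col] = 1 - A[row, col]            # modify A[row_id][col_id] ...
    A[col, row] = 1 - A[col, row]            # ... and A[col_id][row_id], keeping A symmetric
    return A
```

One call performs a single perturbation; repeating it up to the attack budget gives the greedy edge-flip loop.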

Hope this post can help you. Thanks.

wuyiteng commented 4 years ago

Thanks for your prompt reply. We have thought carefully about your answers and don't have a conclusion to report yet; some details were not handled properly in our work. Thanks again for your help.