LFhase / GIA-HAO

[ICLR 2022] Understanding and Improving Graph Injection Attack by Promoting Unnoticeability
https://openreview.net/forum?id=wkMG8cdvh7-
MIT License

Questions about hyperparameters. #3

Closed. Leirunlin closed this issue 10 months ago.

Leirunlin commented 10 months ago

Hi! Glad to see this great work and the code. I have some questions about the settings and hyperparameters.

I also found two bugs in the code:

LFhase commented 10 months ago

Hi @Leirunlin thank you for your interest in our work.

Thanks for pointing out the bugs,

Please feel free to let me know if you have any further questions : )

Leirunlin commented 10 months ago

Thanks for your reply! @LFhase

  • Is prune graph adopted in larger graphs like arxiv?
  • In fact, I can run speitml smoothly. The injected pattern is multi-layer, and the attack performance is fine. Could you share more details about the problem it has?

I am currently trying to conduct some experiments based on the project, so any attempts or experiences would be valuable.

For the bugs,

  • I use the latest torch_geometric (2.4.0), but I think the relevant libraries are torch (1.12.1) and SciPy (1.8.1). After I switched all sparse matrices to the COO format, the problem was solved (a sketch follows below).
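A minimal sketch of that workaround with scipy, assuming the matrices involved are standard scipy sparse matrices (where exactly this is needed in the code is not shown here, and the variable names are illustrative):

```python
import numpy as np
from scipy.sparse import csr_matrix, coo_matrix

# Toy adjacency matrix in CSR format, as it might come out of preprocessing.
adj = csr_matrix(np.array([[0, 1, 0],
                           [1, 0, 1],
                           [0, 1, 0]]))

# CSR/CSC matrices expose .indptr/.indices rather than .row/.col, so code that
# reads edges as explicit (row, col, data) triplets fails on them. Converting
# every sparse matrix to COO before such code restores those attributes.
adj = coo_matrix(adj)
print(adj.row, adj.col, adj.data)
```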

LFhase commented 10 months ago

Thank you for your follow-up! @Leirunlin

If you have further questions or anything to discuss, feel free to continue in this issue, or drop me an email (we could also exchange other instant-communication channels by email) : )

1234238 commented 4 months ago

Hello, after using adj = coo_matrix(adj) in the inject part, the bug is no longer displayed. However, I found another problem: new_edges_x.extend([x, y]) and new_edges_y.extend([y, x]) together with new_data.extend([1, 1]) insert two edges (one in each direction). But when I try to insert a single edge from x to y, using new_edges_x.extend([x]), new_edges_y.extend([y]), and new_data.extend([1]), it seems that two edges are still inserted. Specifically, adj.coo[:1][0].shape[0] remains unchanged, which confuses me. Do you know why this is happening?
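A minimal, self-contained sketch (independent of the repository's code; names follow the snippet above) of what the two insertion patterns should produce when the matrix is rebuilt with scipy's coo_matrix:

```python
from scipy.sparse import coo_matrix

n = 5                      # number of nodes in a toy graph
new_edges_x, new_edges_y, new_data = [], [], []

# Undirected insertion: both directions are stored, so the entry count grows by 2.
x, y = 0, 3
new_edges_x.extend([x, y])
new_edges_y.extend([y, x])
new_data.extend([1, 1])

# Directed insertion: only u -> v is stored, so the entry count grows by 1.
u, v = 1, 4
new_edges_x.extend([u])
new_edges_y.extend([v])
new_data.extend([1])

adj = coo_matrix((new_data, (new_edges_x, new_edges_y)), shape=(n, n))
print(adj.nnz)             # 3 stored entries: (0, 3), (3, 0), (1, 4)
print(adj.row, adj.col)    # if (4, 1) also shows up, something symmetrized the matrix
```

In this isolated setting the single-direction insertion adds only one stored entry, so if the entry count still doubles in the attack code, the reverse edge is most likely added by a later step (e.g., an undirected/symmetrization conversion) rather than by the extend calls themselves.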

LFhase commented 4 months ago

Hi @1234238 Could you inspect the adj matrix line by line through the code, and see whether it gets changed somewhere else, e.g., some functions may convert the graph from directed to undirected?
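A quick way to check for that, sketched with scipy (assuming adj is, or can be converted to, a scipy sparse matrix; the helper name is illustrative):

```python
from scipy.sparse import coo_matrix, csr_matrix

def is_symmetric(adj):
    """Return True if every stored edge (i, j) also appears as (j, i) with the same weight."""
    adj = csr_matrix(adj)
    return (abs(adj - adj.T) > 0).nnz == 0

# A single directed edge is not symmetric until the reverse edge is added.
print(is_symmetric(coo_matrix(([1], ([0], [3])), shape=(5, 5))))           # False
print(is_symmetric(coo_matrix(([1, 1], ([0, 3], [3, 0])), shape=(5, 5))))  # True
```

Calling such a check right after the injection step and again after each later transformation (normalization, conversion to the model's input format, etc.) should narrow down where the reverse edges appear.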

1234238 commented 4 months ago

Thank you for your reply. Additionally, I seem to have found another problem. When using meta_attack, for example:

python -u gnn_misg.py --dataset 'cora' --inductive --eval_robo --eval_attack 'seqgia' --n_inject_max 300 --n_edge_max 7 --runs 1 --disguise_coe 1 --use_ln 0 --injection 'meta' --gpu 0 --model 'gat' --sequential_step 1.0 --grb_split

the execution produces the following error:

Traceback (most recent call last):
  File "gnn_misg.py", line 716, in <module>
    main()
  File "gnn_misg.py", line 634, in main
    x_attack, adj_attack, target_idx = eval_robustness(model, x_test, adj_test, target_idx, data.y, device, args, run)
  File "gnn_misg.py", line 232, in eval_robustness
    adj_attack, features_attack = attacker.attack(model=model,
  File "/home/lc/GIA-HAO/attacks/seqgia.py", line 125, in attack
    adj_attack = meta_injection(self, model, adj_attack, n_inject_cur, self.n_edge_max, features_tmp,
  File "/home/lc/GIA-HAO/attacks/injection.py", line 641, in meta_injection
    adj_meta_grad = torch.autograd.grad(pred_loss, vals, retain_graph=True)[0]
  File "/home/lc/anaconda3/envs/dgl/lib/python3.8/site-packages/torch/autograd/__init__.py", line 300, in grad
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior.

What's strange is that GCN can run, but the other models cannot. Have you encountered this problem?
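For context, this RuntimeError is raised whenever torch.autograd.grad is asked for gradients with respect to a tensor that never influenced the loss. A minimal reproduction, unrelated to the repository's code:

```python
import torch

w_used = torch.randn(3, requires_grad=True)
w_unused = torch.randn(3, requires_grad=True)

loss = (w_used ** 2).sum()   # w_unused never participates in the loss

try:
    # Fails: w_unused was not used when building the graph of `loss`.
    torch.autograd.grad(loss, [w_used, w_unused], retain_graph=True)
except RuntimeError as e:
    print(e)  # "One of the differentiated Tensors appears to not have been used in the graph. ..."

# With allow_unused=True the call succeeds and returns None for the unused tensor.
grads = torch.autograd.grad(loss, [w_used, w_unused], retain_graph=True, allow_unused=True)
print(grads[1])  # None
```

If that is what happens here, the difference between models would come from whether the injected edge-weight tensor (vals) actually participates in pred_loss for that architecture. This is only a guess based on the error message, not a confirmed diagnosis of the repository's code.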