DSE-MSU / DeepRobust

A PyTorch adversarial library for attack and defense methods on images and graphs
MIT License

Questions about RL-S2V #55

Closed · Xoliang-Liu closed this 3 years ago

Xoliang-Liu commented 3 years ago

Hi, I increased the number of modifications to the graph to get a stronger attack on the Cora dataset, but it didn't work. How can I reproduce the result from the paper, where increasing the number of perturbations per node reduces the accuracy on the attacked nodes? And how can I attack my own dataset with RL-S2V? Sorry for my poor English.

ChandlerBang commented 3 years ago

1) I think RL-S2V in the original paper and code only supports deleting a single edge. If you want to increase the number of perturbations, you can simply apply RL-S2V to the graph multiple times; a sketch of that loop follows the quote below.

[From RL-S2V paper:] That is to say, given a graph G and target node c, the adversarial samples are limited to delete single edge within 2-hops of node c.
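A minimal sketch of that loop (not the library's API; run_rl_s2v_once and apply_edge_deletion are hypothetical placeholders for one RL-S2V pass and for removing the edge it returns):

def attack_multiple_times(adj, target_node, n_perturbations,
                          run_rl_s2v_once, apply_edge_deletion):
    """Accumulate n_perturbations single-edge deletions by re-running
    the one-edge attack on the graph modified so far.
    adj is a scipy.sparse.csr_matrix adjacency matrix."""
    modified_adj = adj.copy()
    for _ in range(n_perturbations):
        # hypothetical: the (u, v) edge RL-S2V chooses to delete
        edge = run_rl_s2v_once(modified_adj, target_node)
        # hypothetical: a copy of the graph with that edge removed
        modified_adj = apply_edge_deletion(modified_adj, edge)
    return modified_adj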

2) To attack your own dataset, just create a class with the following attributes, mirroring deeprobust.graph.data.Dataset:

class YourDataset:
    def __init__(self):
        # adjacency matrix, as a scipy.sparse.csr_matrix
        self.adj = None
        # node feature matrix and node labels
        self.features = None
        self.labels = None
        # index arrays for the train/val/test split
        self.idx_train, self.idx_val, self.idx_test = None, None, None
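For example, a minimal sketch that fills these attributes from NumPy files (the file names and the random split are made up for illustration):

import numpy as np
import scipy.sparse as sp

class MyDataset:
    """Mimics deeprobust.graph.data.Dataset; file names are hypothetical."""
    def __init__(self):
        self.adj = sp.csr_matrix(np.load('my_adj.npy'))            # (n, n) adjacency
        self.features = sp.csr_matrix(np.load('my_features.npy'))  # (n, d) features
        self.labels = np.load('my_labels.npy')                     # (n,) integer labels
        # a simple 60/20/20 split over shuffled node indices
        n = self.labels.shape[0]
        idx = np.random.permutation(n)
        self.idx_train = idx[: int(0.6 * n)]
        self.idx_val = idx[int(0.6 * n): int(0.8 * n)]
        self.idx_test = idx[int(0.8 * n):]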

Xoliang-Liu commented 3 years ago

Thank you very much. You mean I can attack the already-attacked dataset to get what I want. But after running test_rl_s2v.py I only get the attack solution and the epoch-best model. How can I make test_rl_s2v.py output the attacked dataset? Thanks again!

ChandlerBang commented 3 years ago

The attack solution file actually stores the perturbations. You can also check env.modified_list, since it keeps one ModifiedGraph per target node:

# one ModifiedGraph per target node records that node's perturbations
for i in range(len(self.target_nodes)):
    self.modified_list.append(ModifiedGraph())
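If you collect the deleted edges as (u, v) node pairs, whether from env.modified_list or by parsing the attack solution file (an assumption about how the perturbations are recorded; check the ModifiedGraph class in the source), you can rebuild the attacked adjacency matrix yourself. A minimal sketch, assuming undirected edges stored as index pairs:

import scipy.sparse as sp

def apply_deletions(adj, deleted_edges):
    """Return a copy of adj (scipy.sparse.csr_matrix) with the given
    undirected edges removed."""
    modified = adj.tolil()  # LIL format supports efficient element updates
    for u, v in deleted_edges:
        modified[u, v] = 0
        modified[v, u] = 0  # keep the adjacency symmetric
    return modified.tocsr()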

However, attacking the already-attacked dataset is not a principled way to generate perturbations, as it deviates from the authors' original intent. I would suggest using Nettack instead, which makes generating more perturbations much easier and more natural.
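A sketch of a targeted attack with Nettack, adapted from the DeepRobust examples (double-check the argument names against the current API):

from deeprobust.graph.data import Dataset
from deeprobust.graph.defense import GCN
from deeprobust.graph.targeted_attack import Nettack

data = Dataset(root='/tmp/', name='cora')
adj, features, labels = data.adj, data.features, data.labels
idx_train, idx_val = data.idx_train, data.idx_val

# train a surrogate GCN for Nettack to attack
surrogate = GCN(nfeat=features.shape[1], nclass=labels.max().item() + 1,
                nhid=16, with_relu=False, device='cpu').to('cpu')
surrogate.fit(features, adj, labels, idx_train, idx_val)

# attack one target node with 5 perturbations
model = Nettack(surrogate, nnodes=adj.shape[0], attack_structure=True,
                attack_features=True, device='cpu').to('cpu')
model.attack(features, adj, labels, target_node=0, n_perturbations=5)
modified_adj = model.modified_adj  # the attacked adjacency matrix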