Hi oldppd,
Thanks for the question.
In the DeepRobust paper (and the survey paper), all experiments are performed with 5 different data splits (5 different attacked graphs for each dataset at one perturbation rate), while in ProGNN we ran the experiments 10 times on one fixed data split (see the ProGNN paper). You can load the ProGNN split through data = Dataset(root='/tmp/', name='cora', setting='prognn').
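For example, something along these lines loads both settings (a rough sketch; the argument and attribute names follow my reading of the DeepRobust API and may differ slightly across versions):

```python
from deeprobust.graph.data import Dataset

# Fixed split from the ProGNN paper: the same 10%/10%/80% train/val/test
# indices every time it is loaded.
data = Dataset(root='/tmp/', name='cora', setting='prognn')
adj, features, labels = data.adj, data.features, data.labels
idx_train, idx_val, idx_test = data.idx_train, data.idx_val, data.idx_test

# The default setting instead draws a random 10%/10%/80% split, so different
# seeds give different splits (this is what the DeepRobust/survey experiments use).
data_random = Dataset(root='/tmp/', name='cora', seed=15)
```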
Let me know if you have other questions.
That is to say, the same split ratio is used in both ProGNN and DeepRobust, but ProGNN uses a fixed data split. That's why there's a gap between these two papers.
I just checked the papers again. In the ProGNN paper, it is written that "For each graph, we randomly choose 10% of nodes for training, 10% of nodes for validation and the remaining 80% of nodes for testing.", while in the DeepRobust paper, it is written that "For each dataset, we randomly choose 10% of nodes for training, 10% of nodes for validation and the remaining 80% for test."
From my understanding, the first random in ProGNN means you randomly choose a data split and fix it for all experiments, and the second random means you randomly choose 5 data splits for 5 perturbation rates. Do I understand correctly?
Thanks for your reply.
Yes, that's correct. Sorry for not making it clear in the paper.
Oh, sorry. It is correct that
the first random in ProGNN means you randomly choose a data split and fix it for all experiments
But for
the second random means you randomly choose 5 data splits for 5 perturbation rates
It actually means we randomly choose 5 data splits for each dataset; for each data split we perform the attack at different perturbation rates (e.g., 5%/10%/15%/20%/25% in the meta attack). So for each dataset we actually have 5*5 attacked graphs (the same data split shared by the 5 attacked graphs at different perturbation rates).
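In other words, the loop structure is roughly as follows. This is only an illustrative sketch, not the exact scripts from the paper: the seed values are arbitrary, the defense training/evaluation step is omitted, and the DeepRobust call signatures may differ slightly across versions.

```python
import numpy as np
import torch
from deeprobust.graph.data import Dataset
from deeprobust.graph.defense import GCN
from deeprobust.graph.global_attack import Metattack

device = 'cuda' if torch.cuda.is_available() else 'cpu'
ptb_rates = [0.05, 0.10, 0.15, 0.20, 0.25]   # meta-attack perturbation rates
split_seeds = [10, 11, 12, 13, 14]           # 5 random splits (seed values are illustrative)

for seed in split_seeds:
    # each seed draws a different random 10%/10%/80% split of the clean graph
    data = Dataset(root='/tmp/', name='cora', seed=seed)
    adj, features, labels = data.adj, data.features, data.labels
    idx_train, idx_val, idx_test = data.idx_train, data.idx_val, data.idx_test
    idx_unlabeled = np.union1d(idx_val, idx_test)

    # surrogate GCN from which metattack takes its meta-gradients
    surrogate = GCN(nfeat=features.shape[1], nclass=labels.max().item() + 1, nhid=16,
                    dropout=0, with_relu=False, with_bias=False, device=device).to(device)
    surrogate.fit(features, adj, labels, idx_train, idx_val, patience=30)

    for rate in ptb_rates:
        attacker = Metattack(model=surrogate, nnodes=adj.shape[0],
                             feature_shape=features.shape, device=device).to(device)
        n_perturbations = int(rate * (adj.sum() // 2))   # rate * number of edges
        attacker.attack(features, adj, labels, idx_train, idx_unlabeled,
                        n_perturbations, ll_constraint=False)
        modified_adj = attacker.modified_adj
        # ... train and evaluate the defense model on modified_adj here ...
# 5 splits x 5 rates = 25 attacked graphs per dataset; ProGNN instead fixes one
# split and reports on its 5 attacked graphs.
```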
Got it. Ta.
One more thing to add here: in the DeepRobust paper (and the survey paper) we use 5 different splits to generate the attacked graphs, i.e., there are 5 different attacked graphs for each dataset at each perturbation rate. I think that is the major reason why the performance looks so different, as we only use one attacked graph per dataset for Pro-GNN and SimP-GCN.
OK, I'd like to verify this point.
Hi Jin!
I'm confused about why the experimental results of the two papers (DeepRobust and Pro-GNN) are so different. For instance, Jaccard shows really good robustness in DeepRobust; the accuracy does not even decrease with increasing perturbation rate on Citeseer. On the contrary, it drops rapidly in the other papers (Pro-GNN and SimP-GCN). Other robust models also show varying degrees of variation.