Closed: zhiqiangzhongddu closed this issue 4 years ago
For GNN models, we apply semi-supervised learning to train the model and obtain the node embeddings. Afterwards, the node classification task is carried out purely on the test split (the 3478 nodes in your example): the learned embeddings are fed to an SVM classifier with different training proportions. This keeps the comparison fair.
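For reference, here is a minimal sketch of that evaluation step, assuming `test_embeddings` and `test_labels` are NumPy arrays holding the frozen GNN embeddings and labels of the held-out nodes; the SVM type, hyperparameters, and proportion values are illustrative placeholders, not the paper's exact settings.

```python
# Sketch (not the authors' exact script): evaluate frozen node embeddings of the
# held-out split with an SVM, re-splitting that set at several training proportions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score

def evaluate_embeddings(test_embeddings, test_labels,
                        train_ratios=(0.1, 0.3, 0.5, 0.7), seed=0):
    """test_embeddings: (N, d) array for the held-out nodes; test_labels: (N,) array."""
    results = {}
    for ratio in train_ratios:
        X_tr, X_te, y_tr, y_te = train_test_split(
            test_embeddings, test_labels,
            train_size=ratio, random_state=seed, stratify=test_labels)
        clf = LinearSVC(max_iter=5000).fit(X_tr, y_tr)  # hyperparameters are placeholders
        pred = clf.predict(X_te)
        results[ratio] = (f1_score(y_te, pred, average="micro"),
                          f1_score(y_te, pred, average="macro"))
    return results
```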
Thanks for your reply. It's much clearer now.
Hello, I recently read your WWW20 paper and your code. It's interesting work. I currently have one question about the data-splitting process in your paper.
You describe how you split the data for the semi-supervised learning models.
Q1: Why is there an additional "Train %" column in Table 3? Does that mean you do not follow the split described above? Or do you use the semi-supervised setting to obtain the embeddings and then use the training rates in Table 3 to train another classifier? If so, can we still call it semi-supervised learning?
Q2: How do you run the GNN baselines, e.g., GCN and GAT? In Table 3 they also appear under different training rates, which is confusing.