lrjconan / LanczosNetwork

Lanczos Network, Graph Neural Networks, Deep Graph Convolutional Networks, Deep Learning on Graph Structured Data, QM8 Quantum Chemistry Benchmark, ICLR 2019
http://arxiv.org/abs/1901.01484
MIT License

Citation Networks Data #8

Closed. dhines6 closed this issue 5 years ago.

dhines6 commented 5 years ago

Hi! I am trying to reproduce the paper's results on the citation networks. In the paper you mention that you use the same pre-processing procedure and follow the transductive setting of Yang et al. 2016 (Planetoid). The Planetoid repository provides two sets of data, one for the inductive version and one for the transductive version.

In the code for the paper Simplifying Graph Convolutional Networks, which you mentioned successfully reproduces the results on Cora, the data provided is the inductive version of Planetoid (ind.cora.x). Most implementations I have seen use a dataset with a train (140) / validation (500) / test (1000) split; this is also the one available in PyTorch Geometric. The other version contains a train (140) / test (1000) split (trans.cora.x).

I would like to know which one you used. Thanks!
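For reference, the public (transductive) split can be inspected directly with PyTorch Geometric's `Planetoid` loader mentioned above; a minimal sketch, with the root path as a placeholder:

```python
# Minimal sketch: inspect the public Cora split via PyTorch Geometric.
# The root directory is a placeholder; adjust as needed.
from torch_geometric.datasets import Planetoid

dataset = Planetoid(root='/tmp/Cora', name='Cora')  # downloads on first use
data = dataset[0]

# Public split sizes: 140 train / 500 validation / 1000 test
print(int(data.train_mask.sum()))  # 140
print(int(data.val_mask.sum()))    # 500
print(int(data.test_mask.sum()))   # 1000
```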

lrjconan commented 5 years ago

Hi, thanks for your interest! We used the inductive setting and tried multiple random splits. Comparing performance on the public split alone is a bit problematic; cf. https://arxiv.org/abs/1811.05868 (Pitfalls of Graph Neural Network Evaluation).
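Drawing multiple random splits over the nodes can be done along these lines; a minimal sketch, where `num_train` and `num_val` are illustrative values rather than the paper's settings:

```python
# Sketch: draw a random train/validation/test split over node indices.
# num_train and num_val are illustrative; remaining nodes form the test set.
import torch

def random_split(num_nodes, num_train, num_val, seed=0):
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(num_nodes, generator=g)
    train_idx = perm[:num_train]
    val_idx = perm[num_train:num_train + num_val]
    test_idx = perm[num_train + num_val:]
    return train_idx, val_idx, test_idx

# e.g., one random split matching the public split sizes on Cora (2708 nodes)
train_idx, val_idx, test_idx = random_split(2708, 140, 500, seed=42)
```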

dhines6 commented 5 years ago

Thanks for the answer and the reference! Then in your experiments do you also use a validation set of 500 nodes and use it for early stopping? Thanks

lrjconan commented 5 years ago

> Thanks for the answer and the reference! Then in your experiments do you also use a validation set of 500 nodes and use it for early stopping? Thanks

Sorry, I just noticed your question. For the public split, we use the same amount of validation data as others (500 nodes). For random splits, train / validation / test = X / (50% - X) / X, where X is the split percentage listed in Table 1 of the paper.
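For illustration, early stopping on a 500-node validation set can look like the following; a minimal self-contained sketch using a toy linear model on random features as a stand-in for the actual GNN (all names, data, and sizes here are illustrative, not from this repo):

```python
# Sketch: early stopping on validation accuracy with a 140 train / 500 val split.
# The linear model and random data are toy stand-ins for a real GNN pipeline.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(2708, 1433)          # Cora-sized feature matrix (toy data)
y = torch.randint(0, 7, (2708,))     # 7 classes, random labels for illustration
train_idx = torch.arange(140)        # 140 training nodes
val_idx = torch.arange(140, 640)     # 500 validation nodes

model = torch.nn.Linear(1433, 7)
opt = torch.optim.Adam(model.parameters(), lr=0.01)

best_val, best_state, patience, bad = 0.0, None, 20, 0
for epoch in range(200):
    model.train()
    opt.zero_grad()
    loss = F.cross_entropy(model(x[train_idx]), y[train_idx])
    loss.backward()
    opt.step()

    model.eval()
    with torch.no_grad():
        val_acc = (model(x[val_idx]).argmax(1) == y[val_idx]).float().mean().item()
    if val_acc > best_val:
        best_val, bad = val_acc, 0
        best_state = {k: v.clone() for k, v in model.state_dict().items()}
    else:
        bad += 1
        if bad >= patience:  # stop once validation accuracy stalls
            break

model.load_state_dict(best_state)  # restore best-on-validation weights
```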