wangzeyu135798 opened this issue 2 years ago
You need to tune the hyper-parameters. The defaults are the ones used in the paper; see the paper for details.
I'd suggest everyone stop using these old datasets! The Open Graph Benchmark (https://ogb.stanford.edu/) offers much better datasets, on which GIN and more advanced models have been extensively benchmarked.
Can you provide the hyper-parameters used in the paper for these datasets? They don't exist in OGB.
I did not record them :(
The hyper-parameters we tune for each dataset are:

1. the number of hidden units: {16, 32} for bioinformatics graphs and 64 for social graphs;
2. the batch size: {32, 128};
3. the dropout ratio after the dense layer: {0, 0.5} (Srivastava et al., 2014);
4. the number of epochs: the single epoch with the best cross-validation accuracy, averaged over the 10 folds, was selected.
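The selection procedure above can be sketched as a small grid search. This is not the repo's actual tuning script; `cv_accuracy` is a hypothetical stand-in for real 10-fold cross-validation that returns per-epoch accuracies averaged over the folds.

```python
# Sketch of the tuning loop described above. `cv_accuracy` is a
# hypothetical placeholder: replace it with real 10-fold CV training.
import itertools

def cv_accuracy(hidden, batch, dropout, epochs=5):
    # Toy deterministic scores standing in for fold-averaged
    # per-epoch validation accuracy.
    return [0.5 + 0.01 * e + 0.001 * hidden - 0.1 * dropout
            for e in range(epochs)]

def tune(social=False):
    # Hidden units: {16, 32} for bioinformatics graphs, 64 for social.
    hidden_choices = [64] if social else [16, 32]
    best = None
    for hidden, batch, dropout in itertools.product(
            hidden_choices, [32, 128], [0.0, 0.5]):
        accs = cv_accuracy(hidden, batch, dropout)
        # Pick the single epoch with the best fold-averaged accuracy.
        epoch = max(range(len(accs)), key=lambda e: accs[e])
        cand = (accs[epoch], hidden, batch, dropout, epoch)
        if best is None or cand > best:
            best = cand
    return best
```

With real training substituted in, `tune()` returns the best accuracy together with the hidden size, batch size, dropout ratio, and epoch that achieved it.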
For social networks we create node features as follows: for the REDDIT datasets, we set all node feature vectors to be the same (thus, features here are uninformative); for the other social graphs, we use one-hot encodings of node degrees (`--degree_as_tag`).
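The two feature schemes can be sketched in plain Python. The helper names below are hypothetical (the actual repo implements this inside its data loader behind `--degree_as_tag`); the input is just an edge list.

```python
# Hypothetical sketch of the two node-feature schemes described above.
from collections import Counter

def degree_one_hot(edges, num_nodes):
    """One-hot encode each node's degree (non-REDDIT social graphs)."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    # One feature dimension per distinct degree in this graph.
    distinct = sorted({deg[n] for n in range(num_nodes)})
    index = {d: i for i, d in enumerate(distinct)}
    feats = []
    for n in range(num_nodes):
        vec = [0] * len(distinct)
        vec[index[deg[n]]] = 1
        feats.append(vec)
    return feats

def uniform_features(num_nodes, dim=1):
    """Identical feature vectors for every node (REDDIT: uninformative)."""
    return [[1] * dim for _ in range(num_nodes)]
```

For example, on a triangle with a pendant node, nodes of equal degree get identical one-hot rows, so all the model can distinguish initially is degree.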
Thanks a lot! I will try it.
Hi, I'm using the published code for this paper, but I can't reproduce the results. For example, on MUTAG the test accuracy is very low (about 70%) while the train accuracy is near 1, so I think it's overfitting. Have you seen this before, and how can I fix it?