weihua916 / powerful-gnns

How Powerful are Graph Neural Networks?
MIT License
1.16k stars 223 forks

Low accuracy #19

Open wangzeyu135798 opened 2 years ago

wangzeyu135798 commented 2 years ago

Hi: I'm using the code you published for this paper, but I can't reproduce the results. For example, on MUTAG the test accuracy is very low (about 70 percent) while the training accuracy is near 1, so I think the model is overfitting. Have you encountered this before, and how can it be fixed?

weihua916 commented 2 years ago

You need to tune the hyper-parameters. The defaults are not necessarily the ones used in the paper. See the paper for the details.

weihua916 commented 2 years ago

I just suggest everyone stop using these old datasets! The Open Graph Benchmark (https://ogb.stanford.edu/) offers much better datasets, where GIN and more advanced models have been extensively benchmarked.

wangzeyu135798 commented 2 years ago

Can you provide the hyper-parameters for the datasets used in the paper? These datasets don't exist in OGB.

weihua916 commented 2 years ago

I did not record them :(

weihua916 commented 2 years ago

The hyper-parameters we tune for each dataset are: (1) the number of hidden units ∈ {16, 32} for bioinformatics graphs and 64 for social graphs; (2) the batch size ∈ {32, 128}; (3) the dropout ratio ∈ {0, 0.5} after the dense layer (Srivastava et al., 2014); (4) the number of epochs, i.e., a single epoch with the best cross-validation accuracy averaged over the 10 folds was selected.
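The search above can be sketched as a plain grid search. This is an illustrative sketch only: `train_and_eval` is a hypothetical placeholder for a 10-fold cross-validation training run of GIN, not a function from the repo; the grids themselves are the ones listed in the paper.

```python
from itertools import product

hidden_grid = [16, 32]     # bioinformatics graphs (use 64 for social graphs)
batch_grid = [32, 128]
dropout_grid = [0.0, 0.5]  # dropout after the dense layer

def train_and_eval(hidden, batch, dropout):
    """Placeholder for a 10-fold CV run of GIN with these settings.
    Should return (best epoch, CV accuracy averaged over the 10 folds).
    Replace the dummy body with actual training."""
    return 100, 0.85  # dummy values for illustration

best = None
for hidden, batch, dropout in product(hidden_grid, batch_grid, dropout_grid):
    epoch, acc = train_and_eval(hidden, batch, dropout)
    if best is None or acc > best[0]:
        best = (acc, hidden, batch, dropout, epoch)

# best now holds the top CV accuracy and the settings that produced it
```

Note that the epoch is itself a tuned quantity: a single epoch number is selected per configuration by the averaged cross-validation accuracy, rather than early-stopping each fold separately.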

For social networks we create node features as follows: for the REDDIT datasets, we set all node feature vectors to be the same (thus, features here are uninformative); for the other social graphs, we use one-hot encodings of node degrees (--degree_as_tag).
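The `--degree_as_tag` idea can be sketched as follows: each node's degree is mapped to a one-hot vector whose columns index the distinct degrees observed in the graph. This is an illustrative stand-alone sketch, not the repo's implementation; the function name and edge-list representation are assumptions.

```python
from collections import Counter

def degree_onehot(edges, num_nodes):
    """One-hot encode node degrees (hypothetical helper, not repo code).
    Columns correspond to the sorted distinct degrees in the graph."""
    deg = Counter()
    for u, v in edges:  # undirected edges as (u, v) pairs
        deg[u] += 1
        deg[v] += 1
    degrees = sorted({deg.get(n, 0) for n in range(num_nodes)})
    col = {d: i for i, d in enumerate(degrees)}
    feats = [[0] * len(degrees) for _ in range(num_nodes)]
    for n in range(num_nodes):
        feats[n][col[deg.get(n, 0)]] = 1
    return feats

# Toy graph: a triangle (0, 1, 2) with a pendant node 3 attached to node 2,
# so the node degrees are 2, 2, 3, 1.
features = degree_onehot([(0, 1), (1, 2), (0, 2), (2, 3)], 4)
```

For the REDDIT datasets, by contrast, every node would simply get the same constant feature vector, so the features carry no information and the model must rely on graph structure alone.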

wangzeyu135798 commented 2 years ago

Thanks a lot! I will try it.