Closed · pindapuj closed this issue 3 years ago
Thanks for your interest. I will clean up and release the graph classification datasets/pre-trained models soon, maybe next week. You can also use GNNExplainer as a template and follow our descriptions. The results are stable and easy to reproduce.
I generated the dataset using GNNExplainer's templating and your descriptions. But IIRC, you are using a different base GCN model than GNNExplainer's. And while the appendix does provide details on the 3-layer model used for training, I think you did not mention the use of batchnorm, concatenation, or add pooling (which all appear as options in the code).
Thank you for releasing this information!
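In case it helps others while the official files are pending, here is a rough sketch of the kind of generator I used, following GNNExplainer's templating and the paper's description. The specifics (800 graphs, a 20-node Barabási–Albert base graph, a single attachment edge, constant 10-dimensional node features) are my own guesses and may not match the official dataset.

```python
# Hypothetical BA-2Motifs-style generator: BA base graphs with either a house motif
# or a five-node cycle attached; the motif type determines the graph label.
# Sizes and feature dimensions below are assumptions, not the official settings.
import random
import networkx as nx
import numpy as np

def make_graph(label, base_nodes=20):
    g = nx.barabasi_albert_graph(base_nodes, 1)
    motif = nx.house_graph() if label == 0 else nx.cycle_graph(5)
    motif = nx.relabel_nodes(motif, {i: base_nodes + i for i in motif.nodes})
    g = nx.union(g, motif)
    g.add_edge(random.randrange(base_nodes), base_nodes)  # attach motif to a random base node
    return g

graphs, labels = [], []
for i in range(800):
    y = i % 2
    graphs.append(make_graph(y))
    labels.append(y)

# constant node features (one row per node of a 25-node graph)
features = [np.ones((25, 10), dtype=np.float32) for _ in graphs]
```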
I am still cleaning the code. You can use the following options first:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--hiddens', type=str, default='20-20-20')  # hidden sizes of the 3 GCN layers
parser.add_argument('--normadj', type=bool, default=False)
# parser.add_argument('--bn', type=bool, default=False)
parser.add_argument('--concat', type=bool, default=False)
parser.add_argument('--valid', type=bool, default=False)
parser.add_argument('--batch', type=bool, default=True)
```
With these options, the GCN can achieve a 1.0 F1 score. Then you can apply PGExplainer to detect the explanations.
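For anyone replicating this in PyTorch Geometric, a minimal sketch of a 3-layer GCN consistent with these options (hidden sizes 20-20-20, no concatenation) might look like the following; the global add-pooling readout and the final linear classifier are assumptions on my part, not confirmed details of the released model.

```python
# Hypothetical PyTorch Geometric sketch of a 3-layer GCN for graph classification.
# Hidden sizes follow the '20-20-20' option above; the add-pooling readout and the
# linear head are assumptions, not the repo's exact architecture.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_add_pool

class GCN(torch.nn.Module):
    def __init__(self, in_dim, hidden=20, num_classes=2):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.conv3 = GCNConv(hidden, hidden)
        self.lin = torch.nn.Linear(hidden, num_classes)

    def forward(self, x, edge_index, batch):
        x = F.relu(self.conv1(x, edge_index))
        x = F.relu(self.conv2(x, edge_index))
        x = F.relu(self.conv3(x, edge_index))  # node embeddings after 3 GCN layers
        x = global_add_pool(x, batch)          # graph-level readout (assumed: add pooling)
        return self.lin(x)                     # logits for graph classification
```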
I have uploaded the pre-trained GNN model and an example of PGExplainer usage.
Thanks!
Hi,
Really cool project. I was wondering whether you have plans to release the BA-2Motifs dataset as well, since at the moment it is missing from the datasets folder, and whether you could provide more details on the hyper-parameter choices you made for the graph classification models.
I looked at the appendix, but I see that there are additional options in the model definition (batchnorm, concatenation, an add-pooling option, etc.).
I'm trying to replicate your results in PyTorch / PyTorch Geometric for the graph classification setups.
Thanks!