Closed: wanyu-lin closed this issue 3 years ago
Dear author, may I know when you will release the complete code for explaining GNNs on graph classification tasks?
We are having some difficulty reproducing the experimental results in your paper. In particular, we have the following questions.
Questions: 1: For the MUTAG dataset, do you use any node features when training the GNN model for graph classification? Did you use a batch-norm layer in the GNN model? If so, what is the batch size?
2: When you calculate the AUC, how do you obtain the ground truth for explaining the GNN model on graph classification tasks? And when training the explainer, do you use the entire dataset, including both classes, or only the mutagenic graphs?
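For context on the AUC question: explanation quality is typically evaluated by treating ground-truth motif edges (e.g. the NO2/NH2 groups in mutagenic MUTAG graphs) as positive labels and the explainer's edge importance scores as predictions. A minimal sketch, with illustrative names (`edge_auc`, `scores`, `labels`) that are not from the repository, using the pure-Python Mann-Whitney formulation (it gives the same value as `sklearn.metrics.roc_auc_score`):

```python
def edge_auc(scores, labels):
    """AUC = P(score of a random motif edge > score of a random non-motif edge)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]  # ground-truth motif edges
    neg = [s for s, y in zip(scores, labels) if y == 0]  # all other edges
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical importance scores for 4 edges, 2 of which are in the motif.
print(edge_auc([0.9, 0.7, 0.6, 0.1], [1, 0, 1, 0]))  # -> 0.75
```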
Thanks for your interest. I am cleaning the code and plan to release the pre-trained models and examples for the BA-2motifs dataset this week, with MUTAG to follow. For now, I have released the MUTAG dataset, as well as the motifs used, in the dataset folder; you can refer to the README file there.
For your specific questions:
Dear author, thank you very much for your prompt response 💯 . I am looking forward to your update :).
Hi Wanyu,
I have released the pre-trained model for MUTAG as well as the pre-processing code. I will upload an example of PGExplainer usage later and let you know.
Best, Dongsheng
Dear author, thank you very much for your prompt response . I am looking forward to your update :).
I have uploaded an example of PGExplainer usage for the MUTAG dataset.
Dear author, thank you very much!👍👍
Dear author, again, thank you very much for your prompt reply. I have a few more questions about reproducing the results.
Best regards, Wanyu
Hi Wanyu,
Sorry, since training the GNN is not the focus of this paper, I didn't save the config file. I just used the pre-trained model for all explanation methods.
For your specific questions,
Results over ten training runs:
train_acc=0.89103 val_acc=0.81106 test_acc=0.78802
train_acc=0.87576 val_acc=0.88018 test_acc=0.84101
train_acc=0.87287 val_acc=0.88940 test_acc=0.85484
train_acc=0.87086 val_acc=0.87327 test_acc=0.88710
train_acc=0.86970 val_acc=0.89862 test_acc=0.87097
train_acc=0.87143 val_acc=0.88940 test_acc=0.86636
train_acc=0.86999 val_acc=0.88249 test_acc=0.88479
train_acc=0.87460 val_acc=0.85023 test_acc=0.88018
train_acc=0.86826 val_acc=0.89171 test_acc=0.88940
train_acc=0.87057 val_acc=0.88710 test_acc=0.87558
The results are stable, especially the training accuracy. As pointed out in GNNExplainer, once the GNN achieves graph classification accuracy above 0.85, the model is good enough to explain. Thus, you can directly use my pre-trained model for a fair comparison.
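The stability claim can be checked by summarizing the ten quoted test accuracies; a quick sketch using only the standard-library `statistics` module:

```python
import statistics

# Test accuracies from the ten runs quoted above.
test_acc = [0.78802, 0.84101, 0.85484, 0.88710, 0.87097,
            0.86636, 0.88479, 0.88018, 0.88940, 0.87558]

mean = statistics.mean(test_acc)    # ~0.864, above the 0.85 threshold on average
std = statistics.stdev(test_acc)    # sample standard deviation across runs
print(round(mean, 4), round(std, 4))
```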
3. The training code: it is similar to train_BA-2motifs.ipynb. If necessary, I will re-tune the hyper-parameters and upload the configuration.
I just used the default config to train the GNN; the training accuracy is around 0.87, and I think the default config is also good enough for the explanation.
Best, Dongsheng
From: Wanyu Lin, sent Monday, December 7, 2020, subject: Re: [flyingdoog/PGExplainer] Reproduce the results for Graph Classification (#4)
Hi Dongsheng,
Thank you very much for your prompt reply :).