Sutongtong233 opened 9 months ago
+1 to all your points. We definitely need to provide metric support in some of these examples, and better visualizations are a welcome addition as well.
Can you clarify on point 3 though? What's the difference between the two datasets?
For MUTAG in PyG:
there is only ONE graph, with `edge_index` and `edge_type` information. According to the documentation, the data processing comes from Modeling Relational Data with Graph Convolutional Networks, so it seems MUTAG here is a relational graph. For the MUTAG dataset used in GNNExplainer, running the code in the original GNNExplainer repo:
there is a list of graphs (4336 graphs), which is used for the graph classification task in GNNExplainer. Each graph carries a graph label (mutagenic or non-mutagenic), edge types (valence types), and node types (chemical atom types).
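The structural difference between the two formats can be sketched without any library: the relational MUTAG is one graph whose edges each carry a relation type, while the dataset GNNExplainer uses is a list of independently labeled graphs. The dicts below are toy illustrations, not the real data:

```python
# Toy sketch of the two data layouts (plain Python, no PyG required).

# Relational MUTAG: ONE graph; every edge carries a relation (edge type).
relational_mutag = {
    "edge_index": [(0, 1), (1, 2), (2, 0)],  # COO-style edge list
    "edge_type":  [0, 1, 0],                 # one relation id per edge
}

# GNNExplainer's dataset: a LIST of graphs, each with its own label.
graph_classification_set = [
    {"edges": [(0, 1), (1, 2)], "node_type": [6, 6, 8], "label": 1},  # mutagenic
    {"edges": [(0, 1)],         "node_type": [6, 7],    "label": 0},  # non-mutagenic
]

# Every edge in the relational graph must have exactly one type.
assert len(relational_mutag["edge_type"]) == len(relational_mutag["edge_index"])
print(len(graph_classification_set))  # 2 toy graphs
```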
You probably want to take a look at TUDataset(name="MUTAG").
Thank you for your answer; I found that this issue has been resolved. The dataset used in GNNExplainer is called Mutagenicity, not MUTAG. They are two different datasets according to https://ls11-www.cs.tu-dortmund.de/staff/morris/graphkerneldatasets:
Both of them can be loaded through TUDataset.
I want to create an example on this dataset, since an explanation on this real-world dataset is more intuitive than on Cora and more persuasive than on a synthetic dataset.
Yes, this sounds good :)
🚀 The feature, motivation and pitch
Explanations for graph data are not as intuitive as those for images, so proper evaluation is very important. The current explanation examples are not well organized; for example:
I want to contribute to PyG as follows:
Alternatives
No response
Additional context
No response