Open LostBenjamin opened 4 years ago
Hi, to run on your own dataset, you should first train the GNN for the prediction task. You can use the base implementation in the repo, or replace it with your own. After saving the model, you can run the explainer, passing in the model checkpoint and specifying the node/graph to explain.
Hi @RexYing
I think it would be great if you could provide an example of how to take an existing model and train it on your own data.
For now, it's not clear how to provide the data to gnn-explainer (e.g. format, reading functions, etc.).
Hi, the first step is to make sure that the model's aggregation can take an edge mask value. For example, sum, mean, or attention aggregation can be adapted to a weighted sum or weighted mean. This step should not affect the model's performance at all.
After that, train this model and save it.
Lastly, have a file similar to explain.py, where you build trainable masks over the features and the adjacency and optimize them. Note that the mask on the adjacency can be sparse: just one value for each edge.
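The steps above can be sketched as a minimal toy in plain Python. This is not the repo's code: the graph, the frozen linear "model", the sparsity weight, and all other values are illustrative assumptions. It shows a weighted-sum aggregation driven by a soft edge mask (one trainable logit per edge) and a hand-derived gradient step that keeps the model output high while penalizing mask size:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy setup (all names and values are illustrative assumptions):
# three edges feeding into one node, scalar node features, and a
# frozen linear "model" with a single weight w.
edges = [(0, 1), (2, 1), (3, 1)]   # (source, target) pairs
x = [1.0, 0.0, 0.5, -1.0]          # per-node scalar features
w = 2.0                            # frozen model parameter

def forward(logits):
    s = [sigmoid(m) for m in logits]                      # soft edge mask in (0, 1)
    h = sum(se * x[u] for (u, _), se in zip(edges, s))    # weighted-sum aggregation
    return w * h, s

# Optimize the mask: keep the prediction high, penalize mask size.
lam = 0.1                          # sparsity penalty (assumed value)
m = [0.0] * len(edges)             # one trainable logit per edge (sparse mask)
for _ in range(200):
    _, s = forward(m)
    # Manual gradient of loss = -pred + lam * sum(s) w.r.t. logit m_e:
    # d loss / d m_e = (-w * x[u] + lam) * s_e * (1 - s_e)
    grad = [(-w * x[u] + lam) * se * (1 - se) for (u, _), se in zip(edges, s)]
    m = [mi - 0.5 * gi for mi, gi in zip(m, grad)]

# Edges that raise the model output end up with mask values near 1,
# the edge that lowers it ends up near 0.
mask = [round(sigmoid(mi), 2) for mi in m]
print(mask)
```

In a real setting the forward pass and gradients would come from autograd, and the loss would be the model's prediction loss plus mask-size and entropy regularizers; only the mask logits are trained while the model weights stay frozen.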
Hi @RexYing
I'm talking about a different thing.
I have 300 adjacency files. How do I read/prepare them for your code? How can I run your code on my data?
@nd7141 have you figured out how to run it on your own data?
EDIT: It seems like PyTorch Geometric has this implemented!
Hi mdanb, I would really appreciate it if you could help me! Have you figured out how to use your own dataset? I am trying to use this code base to explain a GNN (GNNExplainer); I just need graph classification. I have one dataset, and it is just one file. Should I provide more data/information to be able to use this code? For example: graph labels, graph indicators, and so on.
I really appreciate your help! Hossein
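The "graph labels, graph indicators" mentioned above sound like the common TU-style benchmark layout used by many graph-classification datasets. As a sketch (the file names and exact layout below are assumptions about that convention, not this repo's API): `DS_A.txt` holds one "u, v" edge per line with 1-indexed nodes, `DS_graph_indicator.txt` gives the graph id of node i on line i, and `DS_graph_labels.txt` holds one class label per graph. A minimal parser could look like:

```python
# Hypothetical parser for a TU-style dataset; file names/layout are assumed.
def parse_tu_format(edge_lines, indicator_lines, label_lines):
    indicator = [int(line) for line in indicator_lines]   # node i -> graph id
    labels = [int(line) for line in label_lines]          # one label per graph
    graphs = {g: [] for g in set(indicator)}
    for line in edge_lines:
        u, v = (int(t) for t in line.replace(",", " ").split())
        graphs[indicator[u - 1]].append((u, v))           # node u's graph owns the edge
    return graphs, labels

# Tiny in-memory example: two graphs, nodes 1-3 and nodes 4-5.
graphs, labels = parse_tu_format(
    edge_lines=["1, 2", "2, 3", "4, 5"],
    indicator_lines=["1", "1", "1", "2", "2"],
    label_lines=["0", "1"],
)
print(graphs[1], graphs[2], labels)  # [(1, 2), (2, 3)] [(4, 5)] [0, 1]
```

If your dataset is a single file, splitting it into per-graph edge lists plus a label list along these lines should give you something most graph-classification loaders can consume.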
Hi,
This is a very interesting work!
The repo provides several datasets to test GNNExplainer. However, it is not obvious to me how a user can run it on their own model and dataset. Could you please explain how to do that?
Best, Jingxuan