Open · jlevy44 opened this issue 4 years ago
This sounds quite cool, and providing deep learning based visualizations for GNNs should indeed provide a boost to the community. As I am not really familiar with SHAP at the moment, how do you plan to integrate it? Do you have a specific API in mind?
Have you read this paper: http://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions
I think we can just create an explainer; the problem is that graph sizes can be heterogeneous.
> Have you read http://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions?
I think we could frame one of these graph explainability methods as an instance of an additive attribution model, and then compute explanations over the nodes or node features from there.
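For reference, the additive attribution framing from the SHAP paper says the explanation decomposes the prediction as a baseline value plus one contribution per feature. Here is a minimal, dependency-free sketch computing exact Shapley values by subset enumeration on a made-up two-feature model (the model `f`, inputs `x`, and `baseline` are purely illustrative; this brute-force approach only works for tiny feature counts):

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values via subset enumeration (tiny feature counts only)."""
    n = len(x)

    def value(S):
        # features in S take their actual value, all others the baseline value
        z = [x[i] if i in S else baseline[i] for i in range(n)]
        return f(z)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # classic Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += w * (value(set(S) | {i}) - value(set(S)))
    return phi

# toy linear "node feature" model: attributions should recover each term's effect
f = lambda z: 3.0 * z[0] + 2.0 * z[1]
x, base = [1.0, 1.0], [0.0, 0.0]
phi = shapley_values(f, x, base)
```

The additive property is what makes heterogeneous graph sizes awkward in practice: the number of players (nodes or node features) changes per graph, so the explainer has to be rebuilt per input.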
Here are some more explainability methods that we could try to merge into the SHAP framework:
- https://github.com/pfnet-research/bayesgrad
- https://arxiv.org/pdf/1903.03768.pdf
- http://openaccess.thecvf.com/content_CVPR_2019/papers/Pope_Explainability_Methods_for_Graph_Convolutional_Neural_Networks_CVPR_2019_paper.pdf
- https://arxiv.org/pdf/1905.13686.pdf
It doesn't have to be SHAP; we could just re-engineer one of these methods. I just thought SHAP would make for a cool integration.
I'm happy to help with this in the future if you are looking for a collaboration, and we have some nice datasets picked out :)
Sure, your help is very much appreciated :)
Yeah, GNNExplainer is on my TODO :)
Excellent! I’m happy to contribute nice visualizations from the attributions if there is interest.
Would it be difficult to integrate with the AE models?
Hi @jlevy44 @rusty1s To jump into this thread: I have been experimenting with GCNs + Captum for graph interpretability. I'm not sure how much headway has been made, but I have needed to tweak the batching operation to get Integrated Gradients to work. It has been a little tricky.
Hey @Richarizardd, same here re: Captum. I've been trying IG on GCNs as well. Definitely let me know how it goes!
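To make the discussion concrete, here is a plain-NumPy sketch of what Integrated Gradients computes on a toy one-layer GCN. This is not Captum itself (so the batching issue with `edge_index` expansion is sidestepped by using a dense normalized adjacency); the graph, weights `W`, and features `X` are made-up toy values, and the gradient is written analytically for this specific ReLU model:

```python
import numpy as np

# toy 3-node graph with self-loops, GCN-style symmetric normalization
A = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]], float)
d = A.sum(1)
A_hat = A / np.sqrt(np.outer(d, d))

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 4))   # toy weights: 2 input features -> 4 hidden units
X = rng.normal(size=(3, 2))   # toy node feature matrix

def model(X):
    # one GCN layer + ReLU + sum readout -> scalar graph-level score
    return np.maximum(A_hat @ X @ W, 0.0).sum()

def grad(X):
    # analytic gradient of the score w.r.t. node features for this model
    mask = (A_hat @ X @ W > 0).astype(float)
    return A_hat.T @ mask @ W.T

def integrated_gradients(X, baseline, steps=4096):
    # midpoint Riemann approximation of IG along the straight-line path
    alphas = (np.arange(steps) + 0.5) / steps
    avg_grad = sum(grad(baseline + a * (X - baseline)) for a in alphas) / steps
    return (X - baseline) * avg_grad

attr = integrated_gradients(X, np.zeros_like(X))
```

The `attr` matrix has one attribution per node feature, and by IG's completeness axiom the entries sum (approximately) to `model(X) - model(baseline)`. With Captum against an actual PyG model, the equivalent would be passing `edge_index` via `additional_forward_args`, which is where the batching tweaks mentioned above come in.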
That's a great resource! Thanks for letting me know.
Are you interested in bringing some of this functionality to PyTorch Geometric?
Yeah, I'm interested. Though right now I've been satisfied by applying IG using Captum. I'll try to PR; I've just been a bit swamped, so it may be a while.
Given that GCNs are becoming quite popular, I think it would be a very nice contribution if more model interpretability frameworks were incorporated into pytorch_geometric, such as SHAP, or modules that could readily interpret attention weights, for instance. Being able to interrogate significant graph motifs and relevant features on individual nodes/graphs would be a huge boost to the greater community.
I'd be happy to help PR and collaborate on works in this direction.
https://github.com/slundberg/shap
https://github.com/slundberg/shap/issues/511