pyg-team / pytorch_geometric

Graph Neural Network Library for PyTorch
https://pyg.org
MIT License

Integration with SHAP or Other Explainability Framework #683

Open jlevy44 opened 4 years ago

jlevy44 commented 4 years ago

Given that GCNs are becoming quite popular, I think it would be a very nice contribution if more model interpretability frameworks were incorporated into pytorch_geometric, such as SHAP, or modules for readily interpreting attention weights. Being able to interrogate significant graph motifs and relevant features on individual nodes/graphs would be a huge boost to the greater community.

I'd be happy to help with a PR and collaborate on work in this direction.

https://github.com/slundberg/shap https://github.com/slundberg/shap/issues/511

jlevy44 commented 4 years ago

https://arxiv.org/pdf/1903.01610.pdf

rusty1s commented 4 years ago

This sounds quite cool, and providing deep learning based visualizations for GNNs should indeed provide a boost to the community. As I am not really familiar with SHAP at the moment, how do you plan to integrate it? Do you have a specific API in mind?

jlevy44 commented 4 years ago

Have you read this paper: http://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions

jlevy44 commented 4 years ago

I think we can just create an explainer; the problem is that graph sizes can be heterogeneous.

Have you read http://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions?

I think we could frame one of these graph explainability methods as an additive feature attribution model and then compute explanations from there over the nodes or node features.
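
To make the additive-attribution framing concrete, here is a toy sketch (not PyG or SHAP API; the scoring function and feature values are made up for illustration): exact Shapley values computed by enumerating feature coalitions for a small per-node scoring function. The defining additive (efficiency) property is that the attributions sum to `f(x) - f(baseline)`, which is what would let a graph method plug into the SHAP framework.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f over the features of x.

    Features absent from a coalition are replaced by the baseline value.
    Exponential in the number of features, so only viable for tiny inputs.
    """
    n = len(x)
    feats = list(range(n))
    phi = [0.0] * n
    for i in feats:
        others = [j for j in feats if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in feats]
                without_i = [x[j] if j in S else baseline[j] for j in feats]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Hypothetical nonlinear "node score" over 3 features.
f = lambda v: v[0] * v[1] + 2.0 * v[2]
x, base = [1.0, 2.0, 3.0], [0.0, 0.0, 0.0]
phi = shapley_values(f, x, base)

# Additive (efficiency) property: attributions sum to f(x) - f(baseline).
assert abs(sum(phi) - (f(x) - f(base))) < 1e-9
```

Note that the purely additive term `2.0 * v[2]` receives exactly its own contribution (`6.0` here), while the interaction term `v[0] * v[1]` is split symmetrically between the two participating features.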

Here are some more explainability methods that we could try to merge into the SHAP framework:
https://github.com/pfnet-research/bayesgrad
https://arxiv.org/pdf/1903.03768.pdf
http://openaccess.thecvf.com/content_CVPR_2019/papers/Pope_Explainability_Methods_for_Graph_Convolutional_Neural_Networks_CVPR_2019_paper.pdf
https://arxiv.org/pdf/1905.13686.pdf

jlevy44 commented 4 years ago

It doesn't have to be SHAP; we could just reengineer one of these methods. I just thought it would be a cool integration if we ran SHAP.

jlevy44 commented 4 years ago

https://github.com/baldassarreFe/graph-network-explainability

jlevy44 commented 4 years ago

I'm happy to help with this in the future if you are looking for a collaboration and we have some nice datasets picked out :)

rusty1s commented 4 years ago

Sure, your help is very much appreciated :)

jlevy44 commented 4 years ago

#731

jlevy44 commented 4 years ago

https://nips.cc/Conferences/2019/Schedule?showEvent=13964

jlevy44 commented 4 years ago

https://github.com/RexYing/gnn-model-explainer

rusty1s commented 4 years ago

Yeah, GNNExplainer is on my TODO :)

jlevy44 commented 4 years ago

Excellent! I’m happy to contribute nice visualizations from the attributions if there is interest.

jlevy44 commented 4 years ago

Would it be difficult to integrate with the AE models?

jlevy44 commented 4 years ago

https://github.com/pytorch/captum/issues/246

Richarizardd commented 4 years ago

Hi @jlevy44 @rusty1s, to jump into this thread: I have been experimenting with GCNs + Captum for graph interpretability. I'm not sure how much headway has been made, but I have needed to tweak the batching operation for Integrated Gradients to work. It has been a little tricky.

jlevy44 commented 4 years ago

Hey @Richarizardd, same here re: Captum; I've been trying IG on GCNs. Definitely let me know how it goes!
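
For reference, the mechanics of Integrated Gradients on a GCN can be sketched without any framework at all (everything below is a hypothetical toy: the graph, weights, and sum readout are made up). IG averages the gradient along the straight-line path from a baseline to the input and multiplies elementwise by the input difference; because this toy model is linear in the node features, the analytic gradient is constant and the completeness property holds exactly:

```python
import numpy as np

# Toy 3-node graph: adjacency with self-loops, row-normalized (GCN-style).
A = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]], dtype=float)
A_hat = A / A.sum(axis=1, keepdims=True)
w = np.array([0.5, -1.0])  # made-up feature weights

def model(X):
    """One GCN-style propagation followed by a scalar sum readout."""
    return float((A_hat @ X @ w).sum())

def grad(X):
    """Analytic gradient of model() w.r.t. X (the model is linear in X)."""
    return np.outer(A_hat.sum(axis=0), w)

def integrated_gradients(X, baseline, steps=64):
    """Riemann-sum IG along the straight path from baseline to X."""
    alphas = (np.arange(steps) + 0.5) / steps
    avg_grad = np.mean(
        [grad(baseline + a * (X - baseline)) for a in alphas], axis=0)
    return (X - baseline) * avg_grad

X = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, -1.0]])
attr = integrated_gradients(X, np.zeros_like(X))

# Completeness: attributions sum to model(X) - model(baseline).
assert abs(attr.sum() - (model(X) - model(np.zeros_like(X)))) < 1e-6
```

The batching trickiness mentioned above comes from the step dimension: a framework implementation stacks the interpolated inputs into one batch, and a fixed `edge_index` no longer lines up with that expanded node dimension.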

jlevy44 commented 4 years ago

https://stellargraph.readthedocs.io/en/stable/demos/interpretability/index.html

rusty1s commented 4 years ago

That's a great resource! Thanks for letting me know.

rusty1s commented 4 years ago

Are you interested in bringing some of this functionality to PyTorch Geometric?

jlevy44 commented 4 years ago

Yeah, I'm interested, though right now I've been satisfied applying IG via Captum. I'll try to PR; I've just been a bit swamped, so it may be a while.