Open Mellandd opened 8 months ago
@wsad1 FYI
As a heads up, #8512 seeks to make Captum usable for some specific heterogeneous convolution layers for which the current implementation doesn't quite work (`HANConv` and `HGTConv`). I'm planning to update the PR to the current main branch soon. We might need a similar approach for the other explainers (i.e., special implementations, due to the differing ways `HANConv` and `HGTConv` initialize their edge indices when propagating).
I've also been toying around with implementing heterogeneous versions of `GNNExplainer` and `PGExplainer` based on DGL's own heterogeneous implementations of these, though I haven't been totally successful yet (and I think there are a few things we can improve in terms of their implementation).
Hello @rusty1s, do you think I could help with this? Which task do you think would be a good first issue?
I think the most important feature is to support `GNNExplainer` for heterogeneous graphs. Hopefully it shouldn't be hard to do, since we just need to make sure that `edge_mask` and `node_mask` are created for every edge/node type.
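A minimal sketch of that idea, assuming the learnable masks of the homogeneous `GNNExplainer` are simply replicated per node/edge type (the helper name and initialization below are illustrative, not PyG's actual implementation):

```python
import torch
from torch.nn import Parameter, ParameterDict

def init_hetero_masks(x_dict, edge_index_dict):
    # One learnable feature mask per node type.
    node_mask = ParameterDict({
        node_type: Parameter(torch.randn(x.size(0), x.size(1)) * 0.1)
        for node_type, x in x_dict.items()
    })
    # One learnable edge mask per edge type. ParameterDict keys must be
    # strings, so the (src, rel, dst) tuple is joined with '__'.
    edge_mask = ParameterDict({
        '__'.join(edge_type): Parameter(torch.randn(edge_index.size(1)) * 0.1)
        for edge_type, edge_index in edge_index_dict.items()
    })
    return node_mask, edge_mask
```

The masks would then be optimized jointly, exactly as in the homogeneous case, by applying each mask to the inputs of its corresponding node/edge type during the forward pass.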
Hi there @rusty1s, is this feature still needed? Looking forward to working on it.
Yeah, we still don't have heterogeneous GNN support for `GNNExplainer`.
Thanks for your reply @rusty1s. Any idea where I can start? That would help me a lot and let me do this faster.
I've just updated my PR (#8512) that adds support for `HANConv` and `HGTConv` to `CaptumExplainer` (allowing for both node and edge masking). Sorry for the delay.
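For reference, a hedged sketch of how that would be used through the generic `Explainer` interface on a heterogeneous model; `model`, `data`, and the explained index are placeholders, and the config values depend on the actual task:

```python
from torch_geometric.explain import CaptumExplainer, Explainer

# `model` is assumed to be a heterogeneous GNN (e.g. built around HANConv or
# HGTConv) and `data` a HeteroData object; both are placeholders here.
explainer = Explainer(
    model=model,
    algorithm=CaptumExplainer('IntegratedGradients'),
    explanation_type='model',
    node_mask_type='attributes',
    edge_mask_type='object',
    model_config=dict(
        mode='multiclass_classification',
        task_level='node',
        return_type='log_probs',
    ),
)

# Explaining a single node returns a HeteroExplanation with per-type masks.
explanation = explainer(data.x_dict, data.edge_index_dict, index=0)
```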
I'm hoping to restart my attempt to make PyG's implementations of `GNNExplainer` and `PGExplainer` work for heterogeneous GNNs (my last attempt back in April wasn't very successful) if someone else hasn't had the chance yet (though I am happy to pass it off to someone else, such as @kernel-loophole, if they feel strongly about it).
@rachitk thanks for that. I will look into it and let you know if I need anything; I'll start on the implementation soon.
@rusty1s sorry for the delay. Where can I find that `edge_mask`? Can you explain it a bit or point me to the related files?
Is this the right file for `GNNExplainer`? https://github.com/pyg-team/pytorch_geometric/blob/master/torch_geometric/explain/explainer.py
🚀 The feature, motivation and pitch
Explainability is a key feature for GNNs and is already implemented in PyG. However, of all the features introduced, only a few have been adapted to heterogeneous graphs.
Algorithms: Of all the algorithms implemented for explainability, only the Captum algorithm is compatible with heterogeneous graphs. It would be interesting to adapt other graph-specific algorithms such as `GNNExplainer` or `PGExplainer`, as well as other algorithms such as `AttentionExplainer`. Moreover, the algorithms available in PyG could be extended with new algorithms that have been published over the years (for example, see this survey), but that isn't specific to heterogeneous graphs. Maybe in the future, we could work on new algorithms for homogeneous and heterogeneous graphs simultaneously, without this gap.

- `GNNExplainer` for heterogeneous graphs.
- `PGExplainer` for heterogeneous graphs.
- `AttentionExplainer` for heterogeneous graphs.

Features: Some features available in the explanations of homogeneous GNNs are missing for heterogeneous GNNs. For example, the `visualize_graph` method of `Explanation` is not available for `HeteroExplanation`. Right now this can be done with the `get_explanation_subgraph` method and generating the plot by hand with NetworkX (see the sketch below), but it would be nice to do it automatically.

- `visualize_graph`.
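A rough sketch of that manual workaround, assuming the hetero subgraph can be collapsed with `to_homogeneous()` for plotting (`explanation` and the plotting choices are placeholders):

```python
import matplotlib.pyplot as plt
import networkx as nx
from torch_geometric.utils import to_networkx

# `explanation` is assumed to be a HeteroExplanation returned by an Explainer.
# Keep only nodes/edges with non-zero masks, then collapse node/edge types
# into a single graph so it can be converted to NetworkX.
subgraph = explanation.get_explanation_subgraph()
homogeneous = subgraph.to_homogeneous()

g = to_networkx(homogeneous, to_undirected=True)
nx.draw(
    g,
    node_color=homogeneous.node_type.tolist(),  # color nodes by original type
    cmap=plt.cm.Set2,
    with_labels=False,
)
plt.show()
```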
Metrics: Currently, the available metrics such as fidelity or faithfulness are only implemented for homogeneous graphs, but they could be adapted for heterogeneous graphs. To continue the work of #5628, we could also think about implementing new metrics for all kinds of graphs, such as sparsity or stability (e.g., see https://arxiv.org/pdf/2012.15445.pdf).
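For context, this is roughly how fidelity is computed for homogeneous explanations today (a best-effort sketch; `explainer` and `explanation` are assumed to come from the standard `Explainer` workflow). A heterogeneous version would need to perturb the masked nodes/edges of every type:

```python
from torch_geometric.explain.metric import fidelity

# `explainer` is an Explainer instance and `explanation` the (homogeneous)
# Explanation it produced; both are placeholders here.
pos_fidelity, neg_fidelity = fidelity(explainer, explanation)
print(f'Fidelity+: {pos_fidelity:.4f}, Fidelity-: {neg_fidelity:.4f}')
```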
Alternatives
No response
Additional context
No response