Yeah, this is currently expected since all our explainers assume a shared `edge_index` representation across all utilized GNN layers.
Is there any workaround or "fix" that can be made?
The only fix I can think of would be to manually disable explaining message passing layers for which you know the `edge_index` they operate on differs from the input `edge_index`. You can do this by manually hacking it into `set_masks` in `torch_geometric/explain/algorithm/utils.py`.
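For illustration, a rough sketch of what such a hack could look like, assuming `set_masks` walks `model.modules()` and attaches the edge mask to every `MessagePassing` layer (the `skip_explain` flag is made up for this example and is not part of PyG):

```python
# Illustrative sketch of a hacked set_masks() in
# torch_geometric/explain/algorithm/utils.py -- not the actual implementation.
# `skip_explain` is a hypothetical flag you would set on the layers to ignore.
from typing import Union

import torch
from torch import Tensor
from torch.nn import Parameter

from torch_geometric.nn import MessagePassing


def set_masks(
    model: torch.nn.Module,
    mask: Union[Tensor, Parameter],
    edge_index: Tensor,
    apply_sigmoid: bool = True,
):
    for module in model.modules():
        if not isinstance(module, MessagePassing):
            continue

        # Added check: skip layers that build their own edge_index internally
        # (e.g. AttentiveFP's molecule-level readout conv), since a mask sized
        # for the input edge_index would not match their message passing.
        if getattr(module, 'skip_explain', False):
            continue

        # The exact attributes set here differ across PyG versions; mirror
        # whatever your installed set_masks() does for the remaining layers.
        module.explain = True
        module._edge_mask = mask
        module._apply_sigmoid = apply_sigmoid
```

You would then flag the offending layer once before explaining, e.g. something like `model.mol_conv.skip_explain = True` for AttentiveFP (assuming `mol_conv` is the layer that receives the internally built `edge_index`).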
I'm sorry for not getting back to you sooner. Other priorities demanded my attention in March, but I'm actively working to resolve this problem.
I'm using the code described in discussion #7702, which uses the explainer module.
I tried changing `torch_geometric/explain/algorithm/utils.py` as suggested, but to my surprise, the function `set_masks` is not called.
I looked for another similar function, but I couldn't find it :(
🐛 Describe the bug
I'm receiving an AssertionError when explaining `node`, `edge`, and `node_and_edge` with Captum on AttentiveFP. After investigating, I saw that line 153 of `attentive_fp.py` changes the `edge_index` size, which may trigger the error on line 555 of `message_passing.py`. I adapted the examples from AttentiveFP and Captum to reproduce the described error.
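A minimal reproduction of this shape can look like the sketch below. This is not the exact adapted script, just an illustration: the toy graph, feature sizes, and hyperparameters are placeholders, and it assumes the `Explainer`/`CaptumExplainer` interface from `torch_geometric.explain`.

```python
import torch

from torch_geometric.explain import CaptumExplainer, Explainer
from torch_geometric.nn.models import AttentiveFP

# Toy molecular graph with placeholder node and edge features.
x = torch.randn(6, 16)
edge_index = torch.tensor([[0, 1, 1, 2, 3, 4],
                           [1, 0, 2, 1, 4, 3]])
edge_attr = torch.randn(edge_index.size(1), 8)
batch = torch.zeros(x.size(0), dtype=torch.long)

model = AttentiveFP(in_channels=16, hidden_channels=32, out_channels=1,
                    edge_dim=8, num_layers=2, num_timesteps=2)

explainer = Explainer(
    model=model,
    algorithm=CaptumExplainer('IntegratedGradients'),
    explanation_type='model',
    node_mask_type='attributes',
    edge_mask_type='object',  # node-only and node-and-edge masks fail as well
    model_config=dict(mode='regression', task_level='graph',
                      return_type='raw'),
)

# Fails with an AssertionError: AttentiveFP builds a fresh edge_index for its
# molecule-level readout, so the edge mask created for the input edge_index
# no longer matches in size.
explanation = explainer(x, edge_index, edge_attr=edge_attr, batch=batch)
```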
This is the Traceback I received.
Environment

- Installation method (`conda`, `pip`, source): pip
- Versions of `torch-scatter` and related extensions:
  - torch-cluster = 1.6.0+pt113cu117
  - torch-scatter = 2.1.0+pt113cu117
  - torch-sparse = 0.6.16+pt113cu117
  - torch-spline-conv = 1.2.1+pt113cu117