Closed: avivko closed this issue 1 year ago.
Mh, let me think. `self.node_mask.grad` should be a tensor, so `self.node_mask.grad != 0.0` should be a tensor as well. I assume in your case `self.node_mask.grad` is `None`, which converts that to a boolean. Do you know why in your case the gradient is `None`? I cannot reproduce this on my end.
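For illustration, a minimal standalone sketch of the comparison in question (plain PyTorch, not the PyG source):

```python
import torch

mask = torch.nn.Parameter(torch.randn(4))

# No backward pass has run yet, so .grad is None and `None != 0.0`
# evaluates to a plain Python bool:
print(mask.grad != 0.0)   # True (a bool, not a tensor)

# After a backward pass, .grad is a tensor and the comparison is element-wise:
mask.sum().backward()
print(mask.grad != 0.0)   # tensor([True, True, True, True])
```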
@rusty1s Yes, in the first iteration of the epoch loop `self.node_mask`/`self.edge_mask` are not `None`, but `self.node_mask.grad`/`self.edge_mask.grad` are `None`. In the following iterations the `.grad` attributes do become tensors, once `self.hard_node_mask`/`self.hard_edge_mask` end up being set to `True`. Let me know if you have any ideas/suggestions as to how to go about this.
This might have something to do with `optimizer.zero_grad(set_to_none=True)`: https://github.com/pytorch/pytorch/commit/b90496eef5665bc39828f6c1c522f399bcc62f3f. However, setting `set_to_none=False` doesn't seem to solve the issue.
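For reference, a minimal standalone illustration of what `set_to_none` changes (plain PyTorch, unrelated to the explainer code itself):

```python
import torch

param = torch.nn.Parameter(torch.randn(3))
optimizer = torch.optim.SGD([param], lr=0.1)

param.sum().backward()
optimizer.step()

# set_to_none=True (the default in recent PyTorch) resets .grad to None:
optimizer.zero_grad(set_to_none=True)
print(param.grad)   # None

# set_to_none=False keeps a zero-filled gradient tensor instead:
param.sum().backward()
optimizer.zero_grad(set_to_none=False)
print(param.grad)   # tensor([0., 0., 0.])
```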
Can you confirm that the `examples/explain/gnn_explainer.py` example works for you? I believe this might be an issue where either your nodes or your edges do not receive a gradient at all.
I pushed a more meaningful error message via https://github.com/pyg-team/pytorch_geometric/pull/7512.
It does, and you are right: it was because I was using a custom GNN that didn't use PyG's message-passing `propagate` method, and thus the edges didn't receive a gradient.
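To illustrate the distinction, a hypothetical sketch (the class names are made up): GNNExplainer applies its learnable edge mask inside `MessagePassing`'s `propagate()` machinery, so a model that bypasses `propagate()` never routes the mask through its computation graph.

```python
import torch
from torch_geometric.nn import MessagePassing

class UsesPropagate(MessagePassing):
    """Messages flow through propagate(), where the injected edge mask is applied."""
    def __init__(self):
        super().__init__(aggr='add')

    def forward(self, x, edge_index):
        return self.propagate(edge_index, x=x)

class BypassesPropagate(torch.nn.Module):
    """Dense matmul: propagate() is never called, so the injected edge mask
    never enters the computation graph and edge_mask.grad stays None."""
    def forward(self, x, adj):
        return adj @ x
```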
🐛 Describe the bug
When running GNNExplainer as follows:
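A representative invocation, assuming PyG's `Explainer` API with `GNNExplainer` as the algorithm (the `model`, `data`, and hyperparameters below are placeholders, not the exact setup from this report):

```python
from torch_geometric.explain import Explainer, GNNExplainer

explainer = Explainer(
    model=model,                          # placeholder: an already-trained GNN
    algorithm=GNNExplainer(epochs=200),
    explanation_type='model',
    node_mask_type='attributes',
    edge_mask_type='object',
    model_config=dict(
        mode='multiclass_classification',
        task_level='node',
        return_type='log_probs',
    ),
)
explanation = explainer(data.x, data.edge_index, index=10)  # placeholder data/index
```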
You get the following error:
The problem seems to be that the hard mask is a bool and not a tensor.
It seems that `self._train()` initializes the masks via `self._initialize_masks()` and then defines the hard masks from the masks' gradients (see the sketch below). This might be the reason why the hard masks get saved as bools, causing this error.
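A runnable sketch of that step, assuming the `grad != 0.0` construction discussed earlier in the thread (the exact PyG source may differ by version):

```python
import torch

# Stand-ins for the soft masks created by _initialize_masks():
node_mask = torch.nn.Parameter(torch.rand(5))
edge_mask = torch.nn.Parameter(torch.rand(8))

# The hard masks are derived from the soft masks' gradients. If a mask never
# receives a gradient, .grad is None and the comparison collapses to the
# Python bool True instead of a boolean tensor:
hard_node_mask = node_mask.grad != 0.0
hard_edge_mask = edge_mask.grad != 0.0
print(type(hard_node_mask), type(hard_edge_mask))  # <class 'bool'> <class 'bool'>
```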
Here's a printout of the node mask / hard mask:
I can try to submit a pull request with a fix if you would like me to.
Environment
* How you installed PyTorch and PyG (`conda`, `pip`, source): from master, with `pip install -e`
* Any other relevant information (e.g., version of `torch-scatter`):