🐛 Describe the bug

The current default bias of PGExplainer is 0.0. Since `torch.rand_like()` can output values close to 0 or 1, setting `bias = 0.0` can cause `eps` to also be close to 0 or 1. This results in `eps.log()` or `(1 - eps).log()` becoming `-inf`, which in turn leads to NaN model parameters after the update.

Setting a small bias value prevents this issue. However, the `bias` hyperparameter is not mentioned in the PyG documentation: https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.explain.algorithm.PGExplainer.html?highlight=PGExplainer

The resulting NaN values can be confusing for users, who then have to spend time tracking down the cause.

The original implementation by the authors of PGExplainer sets `bias = 0.01` as the default: https://github.com/flyingdoog/PGExplainer/blob/51edc7b3000e980635f68618af027ae24297f6e5/codes/Explainer.py#L122 https://github.com/flyingdoog/PGExplainer/issues/2#issuecomment-733462580

For convenience and to prevent confusion, I suggest setting the default to `bias = 0.01`, consistent with the authors' implementation.
The relevant code is here: https://github.com/pyg-team/pytorch_geometric/blob/44007acc55a28089f673aba9f830a00a773726ea/torch_geometric/explain/algorithm/pg_explainer.py#L237
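As a minimal sketch of the failure mode (assuming the sampling follows the binary concrete relaxation `eps = bias + (1 - 2 * bias) * rand`, as in the authors' implementation; `concrete_sample` below is a hypothetical stand-in for the internal sampling function, not the actual PyG code):

```python
import torch

def concrete_sample(logits: torch.Tensor, temperature: float = 1.0,
                    bias: float = 0.0) -> torch.Tensor:
    # Hypothetical sketch of the binary concrete (Gumbel-sigmoid) sampling.
    # With bias = b > 0, eps is confined to [b, 1 - b), so both eps.log()
    # and (1 - eps).log() stay finite.
    eps = (1 - 2 * bias) * torch.rand_like(logits) + bias
    return ((eps.log() - (1 - eps).log() + logits) / temperature).sigmoid()

# With bias = 0.0, torch.rand_like() can return values at (or arbitrarily
# close to) 0, so eps.log() can become -inf and poison the gradients.
# With bias = 0.01, eps is bounded away from 0 and 1:
logits = torch.zeros(100_000)
eps = 0.01 + (1 - 2 * 0.01) * torch.rand_like(logits)
assert eps.min() >= 0.01 and eps.max() < 0.99
assert torch.isfinite(concrete_sample(logits, bias=0.01)).all()
```

With a nonzero bias the log terms are bounded, so the subsequent parameter update cannot produce NaNs from this code path.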
Versions
PyG version: 2.3.0
PyTorch version: 2.0.1
OS: Ubuntu Server
Python version: 3.10.14
How you installed PyTorch and PyG (conda, pip, source): conda