divelab / DIG

A library for graph deep learning research
https://diveintographs.readthedocs.io/
GNU General Public License v3.0

GNN GI Explainer Raises RuntimeError: CUDA out of memory. #126

Closed matekenya closed 2 years ago

matekenya commented 2 years ago

When using the GNN_GI explainer for graph classification explanation, running the following cells raises a RuntimeError: CUDA out of memory. Is there a workaround for this?

```python
import torch
from dig.xgraph.dataset import MoleculeDataset

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

dataset = MoleculeDataset(root='MoleculeDataset', name='mutag', transform=None)
x = dataset.data.x.to(torch.float32).to(device)
edge_index = dataset.data.edge_index.to(device)
num_classes = dataset.num_classes
```

```python
from dig.xgraph.models import GCN_2l

model = GCN_2l(model_level='graph', dim_node=dataset.num_node_features,
               dim_hidden=64, num_classes=num_classes).to(device)
```

```python
from dig.xgraph.method import GNN_GI

explainer = GNN_GI(model, explain_graph=True)
explainer(x, edge_index, num_classes=num_classes)
```

(screenshot of the `RuntimeError: CUDA out of memory` traceback)

CM-BF commented 2 years ago

Hi, this problem is generally caused by explaining huge graphs. GNN_GI is a walk-based explainer, and the number of walks it enumerates grows rapidly with graph size, which can cause very high GPU memory consumption. I suggest explaining smaller graphs or resorting to other explainers.
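One thing worth noting about the posted code: `dataset.data.x` and `dataset.data.edge_index` are the *collated* storage of a PyTorch Geometric `InMemoryDataset`, i.e. the node features and edges of every graph in MUTAG concatenated together, so the explainer is being asked to explain one enormous graph. Explaining graphs one at a time bounds the memory footprint. Below is a minimal, torch-free sketch of the idea; `split_graphs`, the toy scalar features, and the per-graph node counts are illustrative stand-ins, not part of the DIG or PyG APIs.

```python
# The collated dataset stores all graphs' node features back-to-back.
# Slicing that flat storage with per-graph node counts recovers each
# individual (small) graph, which can then be explained on its own.

def split_graphs(x_all, num_nodes_per_graph):
    """Split a concatenated node-feature list into one list per graph."""
    graphs, start = [], 0
    for n in num_nodes_per_graph:
        graphs.append(x_all[start:start + n])
        start += n
    return graphs

# Three toy graphs with 2, 3, and 1 nodes; features are scalars for brevity.
x_all = [10, 11, 20, 21, 22, 30]
graphs = split_graphs(x_all, [2, 3, 1])
print(graphs)  # [[10, 11], [20, 21, 22], [30]]
```

With DIG, assuming `MoleculeDataset` follows the usual `InMemoryDataset` interface, the same idea is simply indexing the dataset (`data = dataset[0]`) and passing that single graph's `data.x` and `data.edge_index` to the explainer, rather than the collated `dataset.data` tensors.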

matekenya commented 2 years ago

Thanks, this is good information.