divelab / DIG

A library for graph deep learning research
https://diveintographs.readthedocs.io/
GNU General Public License v3.0

PGExplainer on BA2motifs #138

Closed alirezadizaji closed 2 years ago

alirezadizaji commented 2 years ago

Hi, thanks for sharing the library. I was trying to train PGExplainer within the benchmark on BA-2Motifs, and when I print the graph-classification accuracy computed from the provided edge mask I get 50%: the explainer always underfits during training and consistently produces poor explanations for samples containing the house motif. I have tried various configurations, but nothing has changed so far. Any help would be appreciated.
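Roughly, the accuracy check I run looks like the sketch below (the hard top-k thresholding, the keep ratio, and the model's `(x, edge_index, batch)` forward signature are my own assumptions):

```python
import torch

@torch.no_grad()
def masked_accuracy(model, dataset, edge_masks, keep_ratio=0.2, device='cpu'):
    """Classifier accuracy when only the explainer's top-scoring edges are kept.

    edge_masks[i] is the edge-importance vector the explainer returns for graph i.
    """
    model.eval()
    correct = 0
    for data, mask in zip(dataset, edge_masks):
        data, mask = data.to(device), mask.to(device)
        k = max(1, int(keep_ratio * mask.numel()))       # hard top-k selection
        keep = mask.topk(k).indices
        edge_index = data.edge_index[:, keep]            # drop all other edges
        batch = torch.zeros(data.num_nodes, dtype=torch.long, device=device)
        pred = model(data.x, edge_index, batch).argmax(dim=-1)  # assumed forward signature
        correct += int(pred.item() == int(data.y))
    return correct / len(dataset)
```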

Oceanusity commented 2 years ago

Hello, could you provide the GCN and PGExplainer model configurations? In addition, have you tried other explanation methods for this GCN model, and what are their results?

alirezadizaji commented 2 years ago

GCN: `GIN_3l(model_level='graph', dim_node=10, dim_hidden=300, num_classes=2)`
Explainer: `PGExplainer(model, in_channels=600, device=device, explain_graph=True, epochs=num_epochs, lr=3e-3, coff_size=0.03, coff_ent=5e-4, t0=5.0, t1=1.0, sample_bias=0.0)`

Yes, I have tried SubgraphX, GradCAM, and GNNExplainer, and there was no major problem with them, though PGExplainer masks out all edges in both house- and circle-motif instances.
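For reference, this is roughly how I wire the two together (a sketch: `num_epochs` is just an illustrative value, `dataset` stands for the BA-2Motifs graph dataset loaded elsewhere, and the `train_explanation_network` call is how I remember the DIG examples, so it may differ across versions):

```python
import torch
from dig.xgraph.models import GIN_3l
from dig.xgraph.method import PGExplainer

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
num_epochs = 20  # illustrative value

# 3-layer GIN graph classifier: BA-2Motifs has 10-dim node features and 2 classes
model = GIN_3l(model_level='graph', dim_node=10, dim_hidden=300, num_classes=2).to(device)

# PGExplainer scores each edge from the concatenated source/target node embeddings,
# hence in_channels = 2 * dim_hidden = 600
explainer = PGExplainer(model, in_channels=600, device=device, explain_graph=True,
                        epochs=num_epochs, lr=3e-3, coff_size=0.03, coff_ent=5e-4,
                        t0=5.0, t1=1.0, sample_bias=0.0)

# Train the explanation network on the whole dataset; `dataset` is the BA-2Motifs
# graph dataset, and this method name may differ across DIG versions.
explainer.train_explanation_network(dataset)
```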

Oceanusity commented 2 years ago

Hello, I am unsure about this problem. Since all of the edges are masked out, it seems that the scores for all edges are similar or identical. I therefore recommend trying more hyperparameter settings for better results, although you may already have done so.

There is another paper about the reproducibility of PGExplainer on BA-2Motifs. You might find a situation similar to yours described in that paper.

Oceanusity commented 2 years ago

If you want to fine-tune these hyperparameters, I recommend searching over the following sets of values:

```json
{
    "lr":        {"_type": "choice", "_value": [0.005, 0.003, 0.001]},
    "coff_size": {"_type": "choice", "_value": [1.0, 0.5, 0.1, 0.05]},
    "coff_ent":  {"_type": "choice", "_value": [5e-4, 1e-4, 5e-5, 1e-5, 0]}
}
```

`coff_size` is the most important hyperparameter. In addition, different `num_epochs` values can lead to different results.
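If you sweep these values by hand rather than with a tuner, a plain grid search over that space is enough; in the sketch below, `train_and_evaluate` is a hypothetical placeholder for whatever loop you already use to train the explainer and score its explanations:

```python
from itertools import product

# Grid over the suggested search space.
search_space = {
    'lr':        [0.005, 0.003, 0.001],
    'coff_size': [1.0, 0.5, 0.1, 0.05],
    'coff_ent':  [5e-4, 1e-4, 5e-5, 1e-5, 0.0],
}

best_score, best_cfg = float('-inf'), None
for lr, coff_size, coff_ent in product(*search_space.values()):
    cfg = dict(lr=lr, coff_size=coff_size, coff_ent=coff_ent)
    score = train_and_evaluate(**cfg)  # hypothetical helper: trains PGExplainer with cfg and returns an explanation score
    if score > best_score:
        best_score, best_cfg = score, cfg

print(best_cfg, best_score)
```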