SJTU-DMTai / SUNNY-GNN

The official implementation of the AAAI'24 paper: Self-Interpretable Graph Learning with Sufficient and Necessary Explanations.

Test-time Bernoulli sampling #1

Open steveazzolin opened 2 months ago

steveazzolin commented 2 months ago

Dear authors,

I wonder how the test-time sampling from the Bernoulli distribution (as reported in the Explanation Generation section, page 11751) is actually implemented. Checking the code (here), I only see that a hard Gumbel distribution is used during training, while a simple sigmoid is applied at test time. Hence, I do not understand where the Bernoulli sampling is performed, nor where the number of sampled edges (controlled by the parameter k) is implemented.
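For concreteness, this is roughly the pattern I am referring to (a simplified sketch with my own variable names, not the actual code from the repository):

```python
import torch
import torch.nn.functional as F

def edge_mask(att_logits: torch.Tensor, training: bool) -> torch.Tensor:
    """Simplified illustration of the pattern: hard Gumbel sampling during
    training, plain sigmoid scores at test time (no Bernoulli, no top-k)."""
    if training:
        # straight-through Gumbel-Softmax over {keep, drop} for each edge
        logits = torch.stack([att_logits, -att_logits], dim=-1)
        return F.gumbel_softmax(logits, tau=1.0, hard=True)[..., 0]
    # test time: deterministic importance scores in [0, 1]
    return torch.sigmoid(att_logits)
```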

Thanks for the clarification, Steve

JialeDeng commented 1 month ago

Dear Steve,

Thanks a lot for your attention to our work. At test time, the explainer outputs importance scores for the edges in the graph, and Bernoulli sampling can be performed based on these importance scores. The parameter k (the number of sampled edges) should be set according to the specific sparsity requirements. You can find similar implementations in GSAT and PGExplainer. If you use dgl, you may use the subgraph extraction ops to construct a new explainable subgraph. If you use pyg, you may find this code helpful. Please let us know if you have further questions.
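For illustration, here is a minimal sketch of such a post-processing step in a PyG-style format (the tensor names and the choice of k are placeholders, not taken from our code):

```python
import torch

def explain_subgraph(edge_index: torch.Tensor,
                     edge_imp: torch.Tensor,
                     k: int,
                     sample: bool = False):
    """Select an explanatory edge set from per-edge importance scores.

    edge_index: [2, E] edge list (PyG convention).
    edge_imp:   [E] importance scores in [0, 1].
    k:          number of edges to keep, chosen from the target sparsity.
    """
    if sample:
        # Bernoulli sampling: keep each edge independently with probability edge_imp
        keep = torch.bernoulli(edge_imp).bool()
    else:
        # deterministic variant often used for evaluation: keep the top-k edges
        keep = torch.zeros_like(edge_imp, dtype=torch.bool)
        keep[edge_imp.topk(min(k, edge_imp.numel())).indices] = True
    return edge_index[:, keep], edge_imp[keep]
```

With dgl, the indices of the kept edges can instead be passed to dgl.edge_subgraph to obtain the explanation subgraph.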

Best wishes, Jiale

steveazzolin commented 1 month ago

Thanks for your answer, and thank you for pointing me to the references, which are definitely helpful. However, this does not fully address my question, as I cannot find where this sampling is performed in your actual implementation.

Could you please point me to the part of the code where this is actually applied? Thank you again.

Cheers, Steve