RingBDStack / SUGAR

Code for "SUGAR: Subgraph Neural Network with Reinforcement Pooling and Self-Supervised Mutual Information Mechanism"

Is there any ablation about the two losses? #1

Closed by ha-lins 3 years ago

ha-lins commented 3 years ago

Hi @Suchun-sv @SunQingYun1996,

Thanks for the great work! I wonder whether there are any ablation studies, such as SUGAR w/o the supervised loss, SUGAR w/o the self-supervised contrastive loss, or SUGAR w/o reinforcement pooling. I think these would better validate the contribution of each module. I'm studying unsupervised graph representation learning, and the behavior of SUGAR w/o the supervised loss could be especially informative for me.

Thanks!

SunQingYun1996 commented 3 years ago

Thank you for your interest. For the ablation of the self-supervised contrastive loss, we report results with different self-supervised loss coefficients in Figure 4 (f), where a coefficient of β = 0 corresponds to SUGAR without the self-supervised contrastive loss. For the ablation of reinforcement pooling, we report results in Figure 4 (a), where SUGAR-FixedK denotes SUGAR without reinforcement pooling.
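To make the β = 0 ablation concrete: the objective combines the supervised classification loss with the self-supervised (MI-based contrastive) loss weighted by β, so setting β to zero removes the contrastive term. This is a minimal illustrative sketch; the function and argument names are hypothetical and not taken from the SUGAR codebase, whose actual implementation may structure the objective differently.

```python
def total_loss(supervised_loss: float, mi_loss: float, beta: float) -> float:
    """Illustrative combined objective (names are hypothetical, not from the repo).

    supervised_loss: graph classification loss.
    mi_loss: self-supervised mutual-information contrastive loss.
    beta: coefficient swept in Figure 4 (f); beta = 0 drops the MI term,
          giving the 'SUGAR w/o self-supervised loss' ablation.
    """
    return supervised_loss + beta * mi_loss


# beta = 0 reduces the objective to the supervised loss alone.
print(total_loss(0.8, 0.3, 0.0))  # 0.8
print(total_loss(0.8, 0.3, 0.5))  # 0.95
```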

ha-lins commented 3 years ago

Thanks for your reply. Now I understand SUGAR is mainly a supervised learning approach. Btw, I think it would be interesting to see how SUGAR performs in the completely unsupervised scenario (e.g., with just the MI loss). Do you have any intuitions about such performance? @SunQingYun1996 Thanks!

SunQingYun1996 commented 3 years ago

Sorry for the late reply. The main intuition of SUGAR is to represent the graph by striking, label-relevant subgraphs. Since subgraph selection is guided by label relevance, I don't think the selection mechanism would perform well in a completely unsupervised scenario.

ha-lins commented 3 years ago

Thanks for the reply!