Closed ha-lins closed 3 years ago
Thank you for your interest. For the ablation study of the self-supervised contrastive loss, we show results with different self-supervised loss coefficients in Figure 4 (f), where the coefficient β = 0 corresponds to the performance of SUGAR without the self-supervised contrastive loss. For the ablation study of reinforcement pooling, we show the results in Figure 4 (a), where SUGAR-FixedK represents SUGAR without reinforcement pooling.
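To make the β = 0 ablation concrete, here is a minimal sketch of how a total loss with a weighted self-supervised term behaves. The function and variable names (`total_loss`, `l_sup`, `l_mi`, `beta`) are illustrative assumptions, not identifiers from the SUGAR codebase:

```python
def total_loss(l_sup: float, l_mi: float, beta: float) -> float:
    """Supervised classification loss plus a self-supervised
    contrastive (MI) term weighted by beta.

    Setting beta = 0 recovers the 'w/o self-supervised
    contrastive loss' ablation: the MI term has no effect.
    (Hypothetical sketch, not the actual SUGAR implementation.)
    """
    return l_sup + beta * l_mi

# With beta = 0, only the supervised loss remains.
assert total_loss(1.0, 2.0, 0.0) == 1.0
# With beta = 0.5, the MI term contributes half its value.
assert total_loss(1.0, 2.0, 0.5) == 2.0
```

Sweeping `beta` over a range of values (as in Figure 4 (f)) then shows how sensitive performance is to the self-supervised term.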
Thanks for your reply. Now I understand SUGAR is mainly a supervised learning approach. Btw, I think it would be interesting to see how SUGAR performs in the completely unsupervised scenario (e.g., with just the MI loss). Do you have any intuitions about such performance? @SunQingYun1996 Thanks!
Sorry for the late reply. The main intuition behind SUGAR is to represent the graph by its striking, label-relevant subgraphs. I don't think the subgraph selection mechanism would perform well in a completely unsupervised scenario.
Thanks for the reply!
Hi @Suchun-sv @SunQingYun1996,
Thanks for the great work! I wonder if there are any ablation studies such as:

- SUGAR w/o supervised loss
- SUGAR w/o self-supervised contrastive loss
- SUGAR w/o reinforcement pooling

I think these would validate the effects of each module better. I'm studying unsupervised graph representation learning, and the effect of SUGAR w/o supervised loss could inspire me in some way. Thanks!