Shen-Lab / GraphCL

[NeurIPS 2020] "Graph Contrastive Learning with Augmentations" by Yuning You, Tianlong Chen, Yongduo Sui, Ting Chen, Zhangyang Wang, Yang Shen
MIT License

Question about the changes in the similarity of positive and negative pairs #45

Open scottshufe opened 2 years ago

scottshufe commented 2 years ago

Hi, @yyou1996. Thanks for your excellent work! I have a question about how the similarities change when I run the unsupervised_TU code:

First, when I run gsimclr.py on the COLLAB dataset, I find that the loss gradually decreases as expected. Then, to observe how they change during training, I print the pos_sim and neg_sim values (the numerator and denominator of the loss, respectively): https://github.com/Shen-Lab/GraphCL/blob/1d43f79d7f33f8133f9d4b4b8254d8aaeb09a615/unsupervised_TU/gsimclr.py#L132

I find that pos_sim barely changes (I expected to see a significant increase), while neg_sim gradually decreases. The output is as follows:

Epoch 1, Loss 461.42543468475344, p sim 94.16790313720703, n sim 2914.846399307251
Epoch 2, Loss 437.4408010482788, p sim 93.44444546699523, n sim 2210.465358352661
Epoch 3, Loss 415.9113293170929, p sim 94.00770704746246, n sim 1676.811231994629
Epoch 4, Loss 399.2864877343178, p sim 89.69291515350342, n sim 1463.0717622756958
Epoch 5, Loss 404.180169916153, p sim 91.31595058441162, n sim 1554.844750976562
Epoch 6, Loss 407.7608342051506, p sim 95.31525194644928, n sim 1615.2845653533936
Epoch 7, Loss 382.0407901287079, p sim 91.04753322601319, n sim 1260.7995797157287
Epoch 8, Loss 379.83556154966357, p sim 89.12307305335999, n sim 1163.8013469696045
...
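For reference, here is a minimal sketch of how I compute and log the two quantities, assuming the NT-Xent-style loss used in loss_cal; the helper name loss_cal_with_logging and the pos/neg variable names are mine, not from the repo:

```python
import torch

def loss_cal_with_logging(x, x_aug, T=0.2):
    """Sketch of an NT-Xent-style contrastive loss that also returns the
    summed positive/negative exponentiated similarities printed per epoch."""
    batch_size, _ = x.size()

    # cosine similarity between every embedding in x and every embedding in x_aug
    x_abs = x.norm(dim=1)
    x_aug_abs = x_aug.norm(dim=1)
    sim_matrix = torch.einsum('ik,jk->ij', x, x_aug) / torch.einsum('i,j->ij', x_abs, x_aug_abs)
    sim_matrix = torch.exp(sim_matrix / T)

    # diagonal entries: the two views of the same graph (positive pairs)
    pos = sim_matrix[range(batch_size), range(batch_size)]
    # remaining row entries: views of different graphs (negative pairs)
    neg = sim_matrix.sum(dim=1) - pos

    loss = -torch.log(pos / neg).mean()
    # the two scalars accumulated over batches and printed as "p sim" / "n sim"
    return loss, pos.sum().item(), neg.sum().item()
```

The "p sim" and "n sim" values in the log above are these per-batch sums accumulated over one epoch.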

So my question is: has the training process only been reducing the similarity of negative pairs, without increasing the similarity of positive pairs? If so, does the conclusion that GraphCL improves the consistency between different graph views still hold?

Looking forward to your reply! Thanks.