Shen-Lab / GraphCL

[NeurIPS 2020] "Graph Contrastive Learning with Augmentations" by Yuning You, Tianlong Chen, Yongduo Sui, Ting Chen, Zhangyang Wang, Yang Shen
MIT License

Question about Unsupervised_TU #57

Closed junkangwu closed 1 year ago

junkangwu commented 1 year ago

@yyou1996 Hi Yuning, may I ask you about some experimental details? In the README, you said $GPU_ID is the launched GPU ID and $AUGMENTATION could be random2, random3, or random4, which sample from {NodeDrop, Subgraph}, {NodeDrop, Subgraph, EdgePert}, and {NodeDrop, Subgraph, EdgePert, AttrMask}, respectively. So were the results in the paper obtained by running random2 through random4 repeatedly, as multiple runs, with the mean & std reported?
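
To make sure I understand the sampling, here is a minimal sketch of how I read a randomN mode: uniformly pick one augmentation per view from the corresponding pool (the names `AUG_POOLS` and `sample_augmentation` are mine for illustration, not from the repo):

```python
import random

# Hypothetical pools for each randomN mode (names are mine, not from the repo).
AUG_POOLS = {
    "random2": ["NodeDrop", "Subgraph"],
    "random3": ["NodeDrop", "Subgraph", "EdgePert"],
    "random4": ["NodeDrop", "Subgraph", "EdgePert", "AttrMask"],
}

def sample_augmentation(mode: str) -> str:
    """Uniformly pick one augmentation from the pool named by `mode`."""
    return random.choice(AUG_POOLS[mode])

# Draw the augmentations for the two contrastive views under random2.
view1_aug = sample_augmentation("random2")
view2_aug = sample_augmentation("random2")
```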

yyou1996 commented 1 year ago

Hi @junkangwu,

Please refer to the Sec. 4.3 summary in https://proceedings.neurips.cc/paper/2020/file/3fe230348e9a12c13120749e3f9fa4cd-Paper.pdf, where we determine rules of thumb for augmentation selection.

junkangwu commented 1 year ago

@yyou1996 , Thanks a lot for your explanations.

So in the Unsupervised_TU setting, GraphCL adopts the above rules of thumb for augmentation selection, and the mean & std over multiple runs at the 20th epoch are reported. Do I understand correctly?
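
In other words, a minimal sketch of the reporting I have in mind (the accuracy values below are illustrative, not real results):

```python
import numpy as np

# Illustrative accuracies from five hypothetical runs (not real results).
accuracies = np.array([86.5, 87.2, 88.0, 86.1, 87.4])

# np.std defaults to the population std (ddof=0).
print(f"{accuracies.mean():.2f} ± {accuracies.std():.2f}")
```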

junkangwu commented 1 year ago

Hi @yyou1996, I reproduced GraphCL in the unsupervised setting on MUTAG, where the augmentation is random2 (node dropping and subgraph for biochemical molecules). However, the final result is noticeably higher than the one in the paper (88.26±1.76 vs. 86.80±1.34). Is it real, or does some issue exist?

yyou1996 commented 1 year ago

Hi @junkangwu,

Sorry for the delay; I come and check in on a weekly basis. For Q1, yes, you understand correctly. For Q2, I replied to you by email and post the answer here for others' interest. I would say it is plausible, since the two intervals 88.26±1.76 and 86.80±1.34 overlap; more importantly, MUTAG is nearly the smallest dataset, so it can suffer from unstable results.
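
For completeness, a quick check of the overlap using the numbers above:

```python
# Reproduced vs. reported results from the comments above.
repro_mean, repro_std = 88.26, 1.76
paper_mean, paper_std = 86.80, 1.34

# The one-std intervals overlap: 88.26 - 1.76 = 86.50 <= 86.80 + 1.34 = 88.14.
print(repro_mean - repro_std <= paper_mean + paper_std)  # True
```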