Shen-Lab / GraphCL

[NeurIPS 2020] "Graph Contrastive Learning with Augmentations" by Yuning You, Tianlong Chen, Yongduo Sui, Ting Chen, Zhangyang Wang, Yang Shen
MIT License

Results about unsupervised_TU experiments #10

Closed lihy96 closed 3 years ago

lihy96 commented 3 years ago

Hi, @yyou1996

When I run the code of the unsupervised_TU experiments with a fixed random seed (e.g., 0), the outputs, including the loss and accuracy, can differ from run to run.

What is your opinion on this issue? Thanks a lot!

yyou1996 commented 3 years ago

Hi @lihy96,

The seed configuration code snippet https://github.com/Shen-Lab/GraphCL/blob/60969f0ae3574d26309d2a60fd5e816f5f9e666d/unsupervised_TU/gsimclr.py#L139 is the one I commonly use, and it works in my other code (exactly the same randomness each run), but it might not in unsupervised_TU. So there must be some other source of library randomness we haven't noticed; one conjecture of mine is that it is related to the torch_geometric lib. What is your opinion?
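For reference, a typical seed configuration covers Python, NumPy, and PyTorch RNGs, plus the cuDNN flags. This is a generic sketch, not the exact snippet from `gsimclr.py`; the `set_seed` name is hypothetical, and even with all of this set, nondeterministic GPU kernels (e.g., scatter-style ops commonly used by torch_geometric) may still cause run-to-run drift:

```python
import random

import numpy as np
import torch


def set_seed(seed: int) -> None:
    """Seed every RNG source we know about.

    Note: GPU scatter/atomic-add kernels (used heavily in graph libraries
    such as torch_geometric) can remain nondeterministic even after this.
    """
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Trade cuDNN autotuning speed for reproducible convolution algorithms.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```

With the same seed, CPU-side sampling is then repeatable across runs, which helps isolate whether the remaining nondeterminism comes from the GPU side.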

To address this, we report the mean and standard deviation over multiple runs in our paper.
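Reporting across runs can be done with a few lines of stdlib Python; the accuracy values below are made up for illustration:

```python
import statistics

# Hypothetical downstream accuracies from five runs with different seeds.
accs = [0.712, 0.705, 0.718, 0.709, 0.714]

mean = statistics.mean(accs)
std = statistics.stdev(accs)  # sample standard deviation (n - 1 denominator)
print(f"accuracy: {mean:.3f} +/- {std:.3f}")
```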

lihy96 commented 3 years ago

Hi, @yyou1996

Thanks for your reply. I ran the code (unsupervised_TU) after setting a fixed random seed and printed the loss. I found that the loss in the first few epochs is nearly identical across runs, but it can diverge substantially after some epochs, ultimately leading to different evaluation accuracies. Could "accumulated error" be a possible cause of this issue?

yyou1996 commented 3 years ago

I am not sure about that. If that is the issue, maybe training for a longer time can mitigate it?

lihy96 commented 3 years ago

I will try it. Thank you for your suggestions.