Closed by huangzizheng01, 1 year ago
Hi @YEARNLL,
Sorry for the delayed reply, and best wishes for the new year. Is your question resolved? The instructions to reproduce the results are usually documented in the README (including all hyperparameters), and it seems that in https://github.com/Shen-Lab/GraphCL/issues/17#issuecomment-813764808 the MUTAG result is attainable.
Thanks for your reply. I am glad that the problem disappeared when I moved the experiments from a CUDA 10 machine to a CUDA 11 one. It may have been a package fault, so I closed this issue.
I still wonder about some augmentation settings, e.g., for DD and COLLAB. Are they all implemented under `random4`? I haven't found the instructions in the paper or appendix.
Same best wishes to you!
In the unsupervised_TU sub-repo, `random2`, `random3`, and `random4` should be implemented such that they are applicable to all datasets.
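As a rough illustration of what a `randomN` mode could mean, here is a minimal sketch in which each graph is assigned one augmentation drawn uniformly from a pool of N types. The pool contents and function names below are illustrative assumptions, not the repo's actual code; consult the unsupervised_TU sources for the real mapping.

```python
import random

# Hypothetical pools: "randomN" is assumed to pick uniformly among N
# augmentation types per graph. The augmentation names are placeholders.
AUG_POOL = {
    "random2": ["drop_nodes", "subgraph"],
    "random3": ["drop_nodes", "subgraph", "perturb_edges"],
    "random4": ["drop_nodes", "subgraph", "perturb_edges", "mask_attrs"],
}

def pick_augmentation(mode: str, rng: random.Random) -> str:
    """Pick one augmentation type uniformly from the pool for `mode`."""
    return rng.choice(AUG_POOL[mode])

# Example: sample an augmentation for one graph under random4.
rng = random.Random(0)
aug = pick_augmentation("random4", rng)
```

Because the pool only grows with N, a `random4`-style mode would be applicable to any dataset for which each individual augmentation is defined.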
Thanks
Hi! Thanks for your great work. I have trouble reproducing the results in the Unsupervised_TU and Transfer Learning parts.
I didn't change any settings and just ran the provided go.sh with MUTAG and random2, but the scores are significantly lower than those in the paper (about 0.79). When `log_interval` is changed from 10 to 2, the first two results (for epochs 2 and 4, respectively) are consistent with Table 4. So I just wonder about the detailed settings in your experiments: for example, are all the datasets run under the same parameters (epochs, learning rate, etc.)? Thanks.
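For clarity on why the first two results correspond to epochs 2 and 4, here is a minimal sketch of how a `log_interval`-style option typically behaves. The function names are illustrative placeholders, not the repo's actual training loop.

```python
# Hedged sketch: evaluate and record a score every `log_interval` epochs.
# `evaluate` is a hypothetical callback standing in for downstream eval.
def collect_scores(epochs, log_interval, evaluate):
    """Run `epochs` epochs, recording (epoch, score) every `log_interval`."""
    scores = []
    for epoch in range(1, epochs + 1):
        # ... one epoch of contrastive pre-training would happen here ...
        if epoch % log_interval == 0:
            scores.append((epoch, evaluate(epoch)))
    return scores

# With log_interval=2, the first two recorded entries fall at epochs 2 and 4.
print(collect_scores(4, 2, lambda e: 0.0))  # → [(2, 0.0), (4, 0.0)]
```

Under this reading, lowering `log_interval` from 10 to 2 only changes how often scores are recorded, not the training itself, so early-epoch scores become visible.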