Shen-Lab / GraphCL

[NeurIPS 2020] "Graph Contrastive Learning with Augmentations" by Yuning You, Tianlong Chen, Yongduo Sui, Ting Chen, Zhangyang Wang, Yang Shen
MIT License

How to speed up the pretraining on chem in the transfer learning experiment? #25

Closed ha-lins closed 3 years ago

ha-lins commented 3 years ago

Hi @yyou1996,

I wonder how to speed up the pretraining on the chem data. How long did one epoch of pretraining take for you, and which GPU did you use? I think the speed bottleneck is the CPU or I/O: I tried increasing num_workers, but it seems to have no effect.

check-777 commented 3 years ago

I ran the pretraining on a 3090 with num_workers = 8. Besides that, I didn't change the source code.
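
In case it helps, here is a minimal sketch of that setting with a PyG 1.x `DataLoader`; the dataset is a hypothetical stand-in for the preprocessed chem graphs, and `persistent_workers` requires PyTorch >= 1.7:

```python
# Minimal sketch: tuning the loader only. The random graphs below are a
# hypothetical stand-in for the chem pretraining data.
import torch
from torch_geometric.data import Data, DataLoader  # PyG 1.x import path

dataset = [
    Data(x=torch.randn(10, 16), edge_index=torch.randint(0, 10, (2, 20)))
    for _ in range(1024)
]

loader = DataLoader(
    dataset,
    batch_size=256,
    shuffle=True,
    num_workers=8,            # helps only if data loading, not the GPU, is the bottleneck
    pin_memory=True,          # faster host-to-GPU copies
    persistent_workers=True,  # keep workers alive across epochs (PyTorch >= 1.7)
)

for batch in loader:
    pass  # the training step would go here
```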

ha-lins commented 3 years ago

@check-777 Thanks for your helpful comments. Could you please share your PyG and PyTorch package versions? The 3090 only supports CUDA 11.1+, and a higher PyG version might lead to results inconsistent with those in the paper, as mentioned in the README.md.

check-777 commented 3 years ago

> Thanks for your helpful comments. Could you please share your PyG and PyTorch package versions? The 3090 only supports CUDA 11.1+, and a higher PyG version might lead to results inconsistent with those in the paper, as mentioned in the README.md.

That's true, and I fixed some bugs caused by the PyG version. You can check out https://github.com/snap-stanford/pretrain-gnns/issues/14#issuecomment-647493335

These are my PyG and PyTorch versions (PyTorch 1.7.1):

```
pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-1.7.0+cu110.html
pip install torch-sparse -f https://pytorch-geometric.com/whl/torch-1.7.0+cu110.html
pip install torch-cluster -f https://pytorch-geometric.com/whl/torch-1.7.0+cu110.html
pip install torch-spline-conv -f https://pytorch-geometric.com/whl/torch-1.7.0+cu110.html
pip install torch-geometric
```
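
You can sanity-check the resulting environment with stock version attributes (nothing project-specific is assumed here):

```python
# Verify that PyTorch, CUDA, and PyG match the wheels installed above.
import torch
import torch_geometric

print(torch.__version__)            # expect 1.7.1
print(torch.version.cuda)           # expect 11.0 for the +cu110 wheels
print(torch.cuda.is_available())    # should be True if the 3090 is visible
print(torch_geometric.__version__)  # the PyG version under test
```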

ha-lins commented 3 years ago

@check-777 Thanks for your help. I wonder whether these fixes resolve the problem and reproduce the pretraining performance successfully. Is the downstream performance close to the paper's results with the latest version of PyG?