[Closed] ha-lins closed this issue 3 years ago
I ran the pretraining on a 3090, with num_workers = 8. Besides, I didn't change the source code.
@check-777 Thanks for your helpful comments. Could you please share your package versions of PyG and PyTorch? The 3090 only supports CUDA 11.1, and a higher version of PyG might lead to results inconsistent with those in the paper, as mentioned in README.md.
That's true. And I fixed some bugs that were caused by the PyG version; you can check out this url: https://github.com/snap-stanford/pretrain-gnns/issues/14#issuecomment-647493335

These are my versions of PyG and PyTorch (pytorch 1.7.1):

pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-1.7.0+cu110.html
pip install torch-sparse -f https://pytorch-geometric.com/whl/torch-1.7.0+cu110.html
pip install torch-cluster -f https://pytorch-geometric.com/whl/torch-1.7.0+cu110.html
pip install torch-spline-conv -f https://pytorch-geometric.com/whl/torch-1.7.0+cu110.html
pip install torch-geometric
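A side note on those commands: the wheel index pins `torch-1.7.0+cu110` even though PyTorch 1.7.1 is installed. As far as I know, PyG publishes binary wheels per minor PyTorch release, so patch releases such as 1.7.1 resolve to the 1.7.0 index. A minimal sketch of that mapping (the helper name `wheel_index` is hypothetical, for illustration only):

```python
def wheel_index(torch_version: str, cuda: str) -> str:
    """Map an installed torch version to a PyG wheel index URL.

    Assumption: PyG builds wheels per (minor torch release, CUDA)
    pair, so patch releases (e.g. 1.7.1) reuse the x.y.0 wheels.
    Hypothetical helper, not part of any real package.
    """
    major, minor, _patch = torch_version.split(".")
    base = f"{major}.{minor}.0"
    return f"https://pytorch-geometric.com/whl/torch-{base}+{cuda}.html"

print(wheel_index("1.7.1", "cu110"))
# -> https://pytorch-geometric.com/whl/torch-1.7.0+cu110.html
```

This is why installing torch 1.7.1 while pointing pip at the torch-1.7.0 wheel index is consistent rather than a typo.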
Hi @yyou1996,
I wonder how to speed up the pretraining on the chem data. How long did it take you to pretrain one epoch on it, and which GPU did you use? I think the speed bottleneck is the CPU or I/O. I tried increasing `num_workers`, but it seems to have no effect.
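One way to see why raising `num_workers` can stop helping: loader workers only hide the CPU-side batch preparation cost, so once that cost is split finely enough, the GPU step dominates and adding workers changes nothing. A crude back-of-the-envelope model (all timings hypothetical, not measured on this repo):

```python
def epoch_time(n_batches: int, cpu_load_s: float,
               gpu_step_s: float, num_workers: int) -> float:
    """Crude pipeline model: workers prepare batches in parallel,
    so the effective per-batch cost is the slower of
    (CPU loading split across workers) and (one GPU step).
    Illustrative only; real loaders overlap less cleanly."""
    per_batch = max(cpu_load_s / max(num_workers, 1), gpu_step_s)
    return n_batches * per_batch

# Hypothetical numbers: 1000 batches, 40 ms CPU loading per batch,
# 10 ms GPU step. Epoch time plateaus once workers >= 4.
for w in (1, 2, 4, 8, 16):
    print(w, round(epoch_time(1000, 0.040, 0.010, w), 1))
# -> 1 40.0 / 2 20.0 / 4 10.0 / 8 10.0 / 16 10.0
```

Under these (made-up) numbers, going from 8 to 16 workers buys nothing, which would match the observation that increasing `num_workers` had no effect; the remaining levers would be faster storage, pre-tokenized/cached data, or a larger batch size.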