snap-stanford / ogb

Benchmark datasets, data loaders, and evaluators for graph machine learning
https://ogb.stanford.edu
MIT License

gnn training stuck #155

Closed: CnBDM-Su closed this issue 3 years ago

CnBDM-Su commented 3 years ago

[screenshot] When I trained the GNN baseline, it got stuck for more than 20 minutes after printing the parameters. Is this normal? Thanks!

weihua916 commented 3 years ago

Hi! See https://github.com/snap-stanford/ogb/issues/131 for a potential solution: your hard disk needs to be fast, or you need to put all the features into CPU memory.
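For readers landing here, a minimal sketch of the two options, assuming the node features are stored as a NumPy `.npy` file (the filename is hypothetical):

```python
import numpy as np

# Memory-mapped load: feature pages stay on disk and are fetched on demand,
# so every mini-batch hits the disk; on a slow disk this looks like a hang.
feat = np.load('node_feat.npy', mmap_mode='r')

# Alternative: materialize the whole array in CPU RAM up front. It needs
# enough host memory, but it removes the disk from the training loop.
feat_in_memory = np.array(np.load('node_feat.npy', mmap_mode='r'))
```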

CnBDM-Su commented 3 years ago

It works, thanks! One more question: how can I use multiple GPUs for training? When I change the trainer setting, e.g. `gpus=[0,1,2,3,4,5,6,7]`, it throws an error like:

```
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]
Traceback (most recent call last):
  File "rgnn.py", line 416, in <module>
    trainer.fit(model, datamodule=datamodule)
  File "/home/user/miniconda/envs/py36/envs/python37/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 499, in fit
    self.dispatch()
  File "/home/user/miniconda/envs/py36/envs/python37/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 546, in dispatch
    self.accelerator.start_training(self)
  File "/home/user/miniconda/envs/py36/envs/python37/lib/python3.7/site-packages/pytorch_lightning/accelerators/accelerator.py", line 73, in start_training
    self.training_type_plugin.start_training(trainer)
  File "/home/user/miniconda/envs/py36/envs/python37/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/ddp_spawn.py", line 108, in start_training
    mp.spawn(self.new_process, **self.mp_spawn_kwargs)
  File "/home/user/miniconda/envs/py36/envs/python37/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 230, in spawn
    return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
  File "/home/user/miniconda/envs/py36/envs/python37/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 179, in start_processes
    process.start()
  File "/home/user/miniconda/envs/py36/envs/python37/lib/python3.7/multiprocessing/process.py", line 112, in start
    self._popen = self._Popen(self)
  File "/home/user/miniconda/envs/py36/envs/python37/lib/python3.7/multiprocessing/context.py", line 284, in _Popen
    return Popen(process_obj)
  File "/home/user/miniconda/envs/py36/envs/python37/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 32, in __init__
    super().__init__(process_obj)
  File "/home/user/miniconda/envs/py36/envs/python37/lib/python3.7/multiprocessing/popen_fork.py", line 20, in __init__
    self._launch(process_obj)
  File "/home/user/miniconda/envs/py36/envs/python37/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 47, in _launch
    reduction.dump(process_obj, fp)
  File "/home/user/miniconda/envs/py36/envs/python37/lib/python3.7/multiprocessing/reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
MemoryError
```
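A hedged reading of the traceback for anyone hitting the same error: the `ddp_spawn` plugin in the stack pickles the model and datamodule once per spawned GPU worker, so a datamodule that holds the full in-RAM feature matrix gets copied eight times and exhausts host memory inside `reduction.dump()`. A sketch of the failing configuration (`model` and `datamodule` stand in for the objects defined in the user's rgnn.py):

```python
import pytorch_lightning as pl

# On this Lightning version, requesting several GPUs uses the ddp_spawn
# plugin, which serializes the model and datamodule for every worker
# process; eight full copies of the feature matrix raise MemoryError.
trainer = pl.Trainer(gpus=[0, 1, 2, 3, 4, 5, 6, 7])
trainer.fit(model, datamodule=datamodule)  # fails as in the traceback above
```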

rusty1s commented 3 years ago

If you load all node features directly into memory, you might want to make use of shared memory so that the different processes do not need to replicate the data.
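A minimal sketch of that suggestion, assuming the features fit in CPU RAM (the file path is hypothetical): `Tensor.share_memory_()` moves the tensor's storage into shared memory, and torch.multiprocessing then hands the spawned workers a handle to the same pages instead of pickling a copy per process.

```python
import numpy as np
import torch

# Load the features once in the parent process.
feat = torch.from_numpy(np.load('node_feat.npy'))

# Move the storage into shared memory *before* trainer.fit(): spawned DDP
# workers then map the same underlying buffer rather than each unpickling
# its own full copy of the array.
feat.share_memory_()
```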

CnBDM-Su commented 3 years ago

Thanks