awslabs / dgl-ke

High performance, easy-to-use, and scalable package for learning large-scale knowledge graph embeddings.
https://dglke.dgl.ai/doc/
Apache License 2.0

Multi-GPU training is not effective in specific cases #272

Open dnjsdos opened 1 year ago

dnjsdos commented 1 year ago

For the RESCAL model with the wn18 dataset, there is no training speed improvement when I train with multiple GPUs compared to a single GPU. In fact, it is much slower than single-GPU training.

Below is the command line I used. Also, in the benchmark table in the documentation (https://dglke.dgl.ai/doc/benchmarks.html), RESCAL does not appear in the 8-GPU results for the wn18 dataset. Is there a particular reason for this? Or did I miss something? Please let me know.

```
dglke_train --model_name RESCAL --dataset wn18 --batch_size 1024 --log_interval 1000 \
    --neg_sample_size 256 --hidden_dim 250 --gamma 24.0 --lr 0.03 --batch_size_eval 16 \
    --test -adv --gpu 0 1 --max_step 10000 --mix_cpu_gpu --num_proc 2 --async_update \
    --force_sync_interval 1000
```
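
For reference, a plausible single-GPU baseline for this comparison would keep the same hyperparameters and drop the multi-process flags. The exact baseline command is not stated in the issue, so this is a sketch assuming everything else stays the same:

```
# Single-GPU baseline (sketch, assumed): identical hyperparameters, one GPU,
# with --mix_cpu_gpu / --num_proc / --async_update / --force_sync_interval removed.
dglke_train --model_name RESCAL --dataset wn18 --batch_size 1024 --log_interval 1000 \
    --neg_sample_size 256 --hidden_dim 250 --gamma 24.0 --lr 0.03 --batch_size_eval 16 \
    --test -adv --gpu 0 --max_step 10000
```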