DeepGraphLearning / RNNLogic


slow training with GPU #9

Open nitishajain opened 2 years ago

nitishajain commented 2 years ago

Hello,

Thank you for providing the code of your paper. As per the instructions, I am running the code for Version 2 of RNNLogic with emb. While the training is running as expected, it is very slow for both wn18rr and FB15K-237 datasets on my GPU server. Could you inform about your experimental setup for these experiments in terms of the underlying hardware and the expected run times? I could estimate the running times for my setup from this information.

Thanks!

chenxran commented 2 years ago

Hello, I am facing the same problem when trying to re-implement RNNLogic using the code in the main branch. I found that using the multiprocessing package to train the model for each relation concurrently does not speed things up, since a single process already consumes almost 50% of my CPU (Intel Xeon Gold 5220). Did you face the same problem? Approximately how long did it take you to train on FB15k-237, or on much smaller datasets like umls/kinship?
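(For context, a common cause of this symptom is that each worker process itself spawns many intra-op threads, e.g. via PyTorch or NumPy, so the processes contend for the same cores. A minimal sketch of one mitigation — capping threads per worker in a `multiprocessing.Pool` — is below; `train_relation` is a hypothetical stand-in for the per-relation training step, not the repository's actual function.)

```python
import multiprocessing as mp
import os


def _init_worker():
    # Hypothetical mitigation: if each process saturates many cores through
    # intra-op threading (e.g. PyTorch), capping threads per worker lets the
    # processes actually run in parallel across relations.
    try:
        import torch
        torch.set_num_threads(1)
    except ImportError:
        pass


def train_relation(relation_id):
    # Placeholder for the per-relation training step; here it just returns
    # the relation id to show the pool completed the task.
    return relation_id


if __name__ == "__main__":
    relations = list(range(8))
    n_workers = min(len(relations), os.cpu_count() or 1)
    with mp.Pool(processes=n_workers, initializer=_init_worker) as pool:
        results = pool.map(train_relation, relations)
    print(results)
```

Whether this helps depends on where the time actually goes; if a single relation's training is itself the bottleneck, more processes will not reduce wall-clock time.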

mnqu commented 2 years ago

Thanks for your interest, and very sorry for the late response. We have refactored the code; the new version is in the folder RNNLogic+, and it is more readable and easier to run. You might be interested. Thanks!