migalkin / StarE

EMNLP 2020: Message Passing for Hyper-Relational Knowledge Graphs
MIT License

Training time of StarE model #6

Closed bys0318 closed 3 years ago

bys0318 commented 3 years ago

Hi! I tried to train a StarE model on the WD50K dataset on a single GeForce RTX 3090, and one epoch takes 12 h. Was this also the case when the authors ran it, or did I miss something (I didn't modify your code)? If this is really the expected speed, then training for the default 400 epochs seems infeasible for users.

migalkin commented 3 years ago

Hi, make sure you're starting the run.py script with the parameter DEVICE cuda so that everything runs on a GPU. My guess is that it's slow now because everything is running on the CPU.
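
For illustration, an invocation along these lines should pick up the GPU. The DEVICE cuda parameter is the one mentioned above; the DATASET value shown here is only an assumed example of the repo's key-value style flags, so check run.py for the exact names it accepts:

```bash
# Train on the GPU by passing DEVICE cuda to run.py.
# DATASET wd50k is an assumed example value; see run.py for the actual flag names.
python run.py DEVICE cuda DATASET wd50k
```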

bys0318 commented 3 years ago

You are definitely right. Thanks!