Hi @bhcsayx, thank you very much for your interest in our research! What you are seeing is expected: our code actually took about two weeks to produce the results. The test is slow because, for each head, the model iterates over every tail and every relation to check whether a valid triple exists, so the time complexity is very high.
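To illustrate where the time goes, here is a rough sketch of a filtered tail-ranking loop of the kind used in link-prediction evaluation; `score_fn`, `entities`, and `known_triples` are placeholder names, not our actual identifiers, and our released code additionally loops over every relation during the validity check, which multiplies the per-triple cost further.

```python
# Minimal sketch of a filtered tail-ranking loop, assuming a TransE-style
# scorer where a lower score is better; score_fn, entities and known_triples
# are illustrative names, not the repository's actual identifiers.
def filtered_tail_rank(head, rel, true_tail, score_fn, entities, known_triples):
    """Rank the true tail among all candidate entities (filtered setting)."""
    true_score = score_fn(head, rel, true_tail)
    rank = 1
    for tail in entities:                            # every candidate tail entity
        if tail != true_tail and (head, rel, tail) in known_triples:
            continue                                 # skip other valid triples
        if score_fn(head, rel, tail) < true_score:   # lower score = better
            rank += 1
    return rank
```

Even this simplified loop costs one pass over all entities per test triple, so the full test set takes a long time on large knowledge bases.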
Feel free to ask me if you have any further questions. @bhcsayx
Thank you! Maybe I should consider getting access to a better GPU to run it…
Maybe you can reduce the number of test samples a bit, for example as in the sketch below, or run the paper writing part directly.
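A quick way to subsample the test file could look like the following sketch, assuming the OpenKE-style layout where the first line of test2id.txt holds the triple count and each following line is one triple; the output file name and sample size are just placeholders.

```python
import random

def subsample_test_file(src="test2id.txt", dst="test2id_small.txt", n=1000, seed=0):
    """Write a random subset of n test triples, keeping the count header."""
    with open(src) as f:
        total = int(f.readline())              # first line stores the triple count
        triples = [f.readline() for _ in range(total)]
    random.seed(seed)
    subset = random.sample(triples, min(n, total))
    with open(dst, "w") as f:
        f.write(f"{len(subset)}\n")
        f.writelines(subset)

subsample_test_file()  # then point test.py at the smaller file
```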
But it seems that if I want to test that model I need to reduce the number significantly… I'll try running the paper writing part instead. Anyway, thanks again for your effort.
Hi, thanks for your excellent work. One question has me confused: after training the link prediction model, I tried to run test.py in the Existing_model_reading folder, but it runs extremely slowly; it had only processed about 6,000 items in test2id.txt after a week... It also uses a lot of RAM, about 200 GB by the time I discovered the problem. Do you have any idea where the problem might lie? My GPU configuration is shown in the following picture, and I ran test.py as the README describes, except that I added nohup before the command to run it in the background. Thank you very much again!