Closed: lizsmile closed this issue 3 years ago
Hello lizsmile, I didn't use a GPU to test this program before. Since the GKT method was not proposed by us, I have only tried my best to implement it. I think this method uses 3D matrices in the intermediate computation steps, which makes the program run slowly.
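To make the memory concern concrete, here is a rough back-of-the-envelope sketch (my own illustration, not code from this repo) assuming float32 and a hypothetical `[batch, n_skills, n_skills, hidden]` intermediate tensor, the kind of pairwise-interaction buffer a GNN step can materialize:

```python
def tensor_bytes(shape, dtype_bytes=4):
    """Estimate the memory footprint of a dense tensor (float32 by default)."""
    n = 1
    for d in shape:
        n *= d
    return n * dtype_bytes

# Hypothetical GKT-style intermediate: pairwise concept interactions.
# Sizes are illustrative (assist2009 has on the order of ~110 skills).
batch, n_skills, hidden = 128, 110, 32
gib = tensor_bytes((batch, n_skills, n_skills, hidden)) / 1024**3
print(f"{gib:.2f} GiB for one such tensor")
```

Because the footprint grows quadratically in `n_skills`, a dataset with a few times more skills can easily exhaust GPU memory even when assist2009 fits.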
So my advice is: if you want to use GKT as a GNN-related baseline, you can test it on small datasets; it's unnecessary to use all of the data for comparison. Another option is to mark it as 'OOM' (out of memory) in your paper and explain why this approach is memory-consuming.
If you really want to run this program on large-scale datasets, you can try to optimize my code for the GPU. I have made detailed comments in the code, so feel free to contact me if you have any questions.
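One generic optimization direction (a sketch of my own, not specific to this repo's code) is to process the skill dimension in chunks instead of materializing the full broadcasted 3D tensor at once, trading a little speed for a much smaller peak memory:

```python
import numpy as np

def pairwise_scores_chunked(x, chunk=16):
    """Compute all pairwise dot products x[i] . x[j] without ever
    materializing the full (n, n, d) broadcasted tensor in memory."""
    n = x.shape[0]
    out = np.empty((n, n), dtype=x.dtype)
    for start in range(0, n, chunk):
        end = min(start + chunk, n)
        # Only a (chunk, n, d) slice is broadcast at a time.
        out[start:end] = (x[start:end, None, :] * x[None, :, :]).sum(-1)
    return out

x = np.random.rand(110, 32).astype(np.float32)
assert np.allclose(pairwise_scores_chunked(x), x @ x.T, atol=1e-4)
```

The same chunking idea carries over to PyTorch tensors on the GPU; gradient checkpointing is another standard trade of compute for memory.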
Thank you for your reply. I will try to optimize it on the GPU. I am curious about its performance on a larger dataset.
Ok, good luck! If you have any other questions later, you can open a new issue.
Hi, this code is beautiful, but it runs very slowly on my NVIDIA RTX 2080 Ti, taking 248 seconds for one batch (on assist2009, batch size = 128). And once I use a dataset with a larger number of skills, the program crashes due to lack of GPU memory. So I wonder what kind of GPU you used to run this model and how long it takes to train?