nareto closed this issue 4 years ago
I have the same question.
I think I may have found the crux of the issue: it's the lines in utils.py
that call dgl.batch
(in two places, around lines 238 and 278).
Right before each of those calls I added
g_list = [g.to(torch.device('cuda:0')) for g in g_list]
In a few hours I'll know if it worked
Thanks a lot! It works!
Hello, I'm having trouble running the training on an NVIDIA GPU. I always get the same error about the tensors not all being on the same device (cpu and gpu). I saw there were already similar issues, so I pulled in the changes two hours ago and retried, but I still hit the same problem when running
train.py
(on the YAGO dataset, with the same result on WIKI). Python is 3.6; with PyTorch 1.4.0 (CUDA 10.1) I get
the same error with PyTorch 1.5.1 (CUDA 10.1), at the exact same line in
utils.py