I got the same error. Any solution? Thanks.
Hi guys, thank you for your interest. Did you run it on GPUs? You should also specify the GPU device number, e.g., `-gpu 0`.
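For example, you could launch pretraining with something like `python3 pretrain.py -gpu 0` (a hypothetical invocation; see the README for the full set of flags for your dataset).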
Hi, I think the problem is the `move_dgl_to_cuda(batched_graph)` call at https://github.com/INK-USC/RE-Net/blob/e28e41611a700368d45aa52191029f28120d3028/Aggregator.py#L97, which is reached from the `get_global_emb` function in global_model.py (https://github.com/INK-USC/RE-Net/blob/e28e41611a700368d45aa52191029f28120d3028/global_model.py#L67). After the first loop iteration, the first graph in `g_list` has been moved to CUDA. On the second iteration, `len(g_list) == 2`, but the second graph is still on the CPU, so `dgl.batch` tries to batch two graphs that live on different devices. I changed the code a bit, so https://github.com/INK-USC/RE-Net/blob/e28e41611a700368d45aa52191029f28120d3028/Aggregator.py#L93 now reads:
```python
for tim in timess:
    # move each graph to CUDA before appending, so every graph
    # in g_list is on the same device when dgl.batch is called
    move_dgl_to_cuda(graph_dict[tim.item()])
    g_list.append(graph_dict[tim.item()])
```
As you can see, each graph is moved to CUDA before it is appended to `g_list`, so the problem is solved. Thanks.
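For reference, `move_dgl_to_cuda` is assumed here to simply push a graph's feature tensors onto the GPU; a minimal sketch of such a helper (my paraphrase, not necessarily the repo's exact implementation) would be:

```python
def move_dgl_to_cuda(g):
    # sketch: move every node/edge feature tensor of the DGL graph
    # to the default GPU, so that all graphs later passed to
    # dgl.batch end up on the same device
    g.ndata.update({k: g.ndata[k].cuda() for k in g.ndata})
    g.edata.update({k: g.edata[k].cuda() for k in g.edata})
```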
Hi, thanks for your comment! I have updated my code!
Thanks a lot for your suggestion.
Hello, I got an error when I run the source code pretrain.py. I hope you can help me solve this problem. The error information is as follows:
Hoping for your help. Thank you.