INK-USC / RE-Net

Recurrent Event Network: Autoregressive Structure Inference over Temporal Knowledge Graphs (EMNLP 2020)
http://inklab.usc.edu/renet/

Environment issues #60

Open Harley-ZP opened 2 years ago

Harley-ZP commented 2 years ago

I am using Windows 10. I followed the markdown instructions and installed the packages, including dgl-cuda10.1, but I get 'No module named dgl' when executing `import dgl`. It seems that dgl-cuda is not being imported correctly.

Is anyone facing the same problem?


MrLiuCC commented 2 years ago

Your environment is probably not set up correctly.

MrLiuCC commented 2 years ago

I used the following commands to create the environment successfully:

```
conda create -n renet python=3.6 numpy
conda activate renet
conda install pytorch==1.6.0 torchvision==0.7.0 cudatoolkit=10.1 -c pytorch
conda install -c dglteam dgl-cuda10.1
conda install scikit-learn
```
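After creating the environment, a quick import check confirms that PyTorch and DGL are installed and can see the GPU (a minimal sketch; the expected version numbers follow from the commands above, and a working CUDA 10.1 driver is assumed):

```python
# Quick sanity check for the renet environment
# (run inside `conda activate renet`).
import torch
import dgl

print("torch:", torch.__version__)    # expected: 1.6.0
print("dgl:", dgl.__version__)        # dgl-cuda10.1 build
print("CUDA available:", torch.cuda.is_available())  # should be True on a GPU machine
```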

Harley-ZP commented 2 years ago

Thanks a lot! I may try this solution later!


ZhengJialin1000 commented 2 years ago

I get the following error:

```
E:\anaconda3\lib\site-packages\torch\_tensor.py:575: UserWarning: floor_divide is deprecated, and will be removed in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor'). (Triggered internally at ..\aten\src\ATen\native\BinaryOps.cpp:467.)
  return torch.floor_divide(self, other)
Traceback (most recent call last):
  File "pretrain.py", line 139, in <module>
    train(args)
  File "pretrain.py", line 83, in train
    loss = model(batch_data, true_s, true_o, graph_dict)
  File "E:\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\code\RE-Net-master\global_model.py", line 47, in forward
    packed_input = self.aggregator(sorted_t, self.ent_embeds, graph_dict, reverse=reverse)
  File "E:\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\code\RE-Net-master\Aggregator.py", line 55, in forward
    batched_graph.ndata['h'] = ent_embeds[batched_graph.ndata['id']].view(-1, ent_embeds.shape[1])
  File "E:\anaconda3\lib\site-packages\dgl\view.py", line 84, in __setitem__
    self._graph._set_n_repr(self._ntid, self._nodes, {key : val})
  File "E:\anaconda3\lib\site-packages\dgl\heterograph.py", line 4124, in _set_n_repr
    ' same device.'.format(key, F.context(val), self.device))
dgl._ffi.base.DGLError: Cannot assign node feature "h" on device cuda:0 to a graph on device cpu. Call DGLGraph.to() to copy the graph to the same device.
```

I added `batched_graph = dgl.batch(g_list)` before line 55 of Aggregator.py, but that only changed the error to `local variable 'batch_data' referenced before assignment`. How should I fix this? It is probably a DGL version issue.
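The DGLError itself points at the fix: the batched graph lives on the CPU while the entity embeddings are on `cuda:0`, so the graph must be moved to the embeddings' device before node features are assigned. A minimal sketch of the change around Aggregator.py line 55 follows; the variable names are taken from the traceback, and whether `dgl.batch` preserves the input graphs' device depends on your DGL version, so this is an assumption rather than the authors' official patch:

```python
# Aggregator.py, around line 55 (sketch): copy the batched graph to the
# device of the embeddings before assigning node features, as the
# DGLError message recommends. Assumes g_list and ent_embeds are the
# names from the traceback.
batched_graph = dgl.batch(g_list)
batched_graph = batched_graph.to(ent_embeds.device)  # move graph to cuda:0
batched_graph.ndata['h'] = ent_embeds[batched_graph.ndata['id']].view(-1, ent_embeds.shape[1])
```

If your DGL version already returns the batched graph on the right device, the extra `.to()` call is a harmless no-op, so the change is safe to keep either way.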