Command (using --seq-len 2 as an example): py train.py -d YAGO --gpu 0 --model 3 --dropout 0.5 --n-hidden 200 --lr 1e-3 --seq-len 2
Traceback:
File "train.py" in <module>
    train(args)
File "train.py" in train
    loss = model.get_loss(batch_data, (s_hist, s_hist_t), (o_hist, o_hist_t), graph_dict)
File "model.py" in get_loss
    loss, _, _, _, _ = self.forward(triplets, s_hist, o_hist, graph_dict)
File "model.py" in forward
    s_packed_input = self.aggregators_s(s_hist, s, r, self.ent_embeds, self.rel_embeds[:self.num_rels], graph_dict, reverse=False)
File "module.py" in __call__
    result = self.forward(*input, **kwargs)
File "Aggregator.py" in forward
    (embeds, ent_embeds[s_tem[i]].repeat(len(embeds), 1)
RuntimeError: CUDA error: device-side assert triggered
Settings: Python 3.6.7, PyTorch 1.1.0, CUDA 9.0, DGL 0.3
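(Side note: a device-side assert raised from an indexing expression such as ent_embeds[s_tem[i]] usually means an out-of-range index. Because CUDA kernels run asynchronously, the Python line reported can be misleading; rerunning the same command with CUDA_LAUNCH_BLOCKING=1 set, a general PyTorch/CUDA debugging step rather than anything specific to this repo, makes the traceback point at the actual failing operation:

    CUDA_LAUNCH_BLOCKING=1 py train.py -d YAGO --gpu 0 --model 3 --dropout 0.5 --n-hidden 200 --lr 1e-3 --seq-len 2
)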
Hello, you need to preprocess with seq-len 2 first: in the get_history_graph.py file, set 'history_len' to 2, then run the code. Thanks!
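A minimal sketch of that change, assuming get_history_graph.py keeps the window size in a top-level history_len variable (the file itself is not quoted in this thread) and that the preprocessing script is rerun before training:

    # get_history_graph.py -- assumed excerpt, not the actual file contents
    history_len = 2  # keep this equal to the --seq-len passed to train.py

    # then regenerate the history files and retrain, e.g.:
    #   py get_history_graph.py
    #   py train.py -d YAGO --gpu 0 --model 3 --dropout 0.5 --n-hidden 200 --lr 1e-3 --seq-len 2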