Source code for 'Improving automatic source code summarization via deep reinforcement learning'
Error while training the Hybrid Model: Function CatBackward returned an invalid gradient at index 1 - got [85, 1, 512] but expected shape compatible with [57, 1, 512] failed. #5
```
number of parameters: 92592823
opt.eval: False
opt.eval_sample: False
supervised_data.src: 54426
supervised_data.tgt: 54426
supervised_data.trees: 54426
supervised_data.leafs: 54426
supervised training..
start_epoch: 1
XENT epoch *
Model optim lr: 0.001
<class 'lib.data.Dataset.Dataset'> 54426
```
```
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:1351: UserWarning: nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.
  warnings.warn("nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.")
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:1340: UserWarning: nn.functional.tanh is deprecated. Use torch.tanh instead.
  warnings.warn("nn.functional.tanh is deprecated. Use torch.tanh instead.")
```
```
/content/drive/My Drive/notebooks/Python_method_name_prediction/code_summarization_public/lib/model/HybridAttention.py:34: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
  attn_tree = self.sm(attn_tree)
/content/drive/My Drive/notebooks/Python_method_name_prediction/code_summarization_public/lib/model/HybridAttention.py:36: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
  attn_txt = self.sm(attn_txt)
outputs: torch.Size([26, 32, 512])
/content/drive/My Drive/notebooks/Python_method_name_prediction/code_summarization_public/lib/metric/Loss.py:8: UserWarning: Implicit dimension choice for log_softmax has been deprecated. Change the call to include dim=X as an argument.
  log_dist = F.log_softmax(logits)
loss value: 3042.23095703125
```
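The softmax/log_softmax warnings ask for an explicit `dim` argument, since the implicit choice depends on the input's dimensionality and is deprecated. A sketch of the fix for both call sites; `dim=-1` is an assumption here, the right axis depends on how `attn_tree`/`attn_txt` and `logits` are actually laid out:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.randn(26, 32, 512)

# HybridAttention.py: construct the Softmax module with an explicit dim
sm = nn.Softmax(dim=-1)
attn = sm(logits)

# Loss.py: pass dim explicitly to log_softmax
log_dist = F.log_softmax(logits, dim=-1)
```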
```
---else---
torch.Size([26, 32, 512])
torch.Size([26, 32, 512])
Traceback (most recent call last):
  File "a2c-train.py", line 339, in <module>
    main()
  File "a2c-train.py", line 321, in main
    xent_trainer.train(opt.start_epoch, opt.start_reinforce - 1, start_time)
  File "/content/drive/My Drive/notebooks/Python_method_name_prediction/code_summarization_public/lib/train/Trainer.py", line 30, in train
    train_loss = self.train_epoch(epoch)
  File "/content/drive/My Drive/notebooks/Python_method_name_prediction/code_summarization_public/lib/train/Trainer.py", line 85, in train_epoch
    loss = self.model.backward(outputs, targets, weights, num_words, self.loss_func)
  File "/content/drive/My Drive/notebooks/Python_method_name_prediction/code_summarization_public/lib/model/EncoderDecoder.py", line 547, in backward
    outputs.backward(grad_output)
  File "/usr/local/lib/python3.6/dist-packages/torch/tensor.py", line 195, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py", line 99, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: Function CatBackward returned an invalid gradient at index 1 - got [85, 1, 512] but expected shape compatible with [57, 1, 512]
failed.
```
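For context, a CatBackward shape mismatch like this usually means a tensor from a previous batch's graph (here built for a length-57 input) is still reachable when backward runs for the current length-85 batch, e.g. an encoder/decoder state or attention context carried across batches without being detached. A minimal sketch of the usual remedy under that assumption (the LSTM and names are illustrative, not the repo's code):

```python
import torch

rnn = torch.nn.LSTM(input_size=512, hidden_size=512)
state = None

for length in (57, 85):             # consecutive batches of different lengths
    x = torch.randn(length, 1, 512)
    out, state = rnn(x, state)
    out.sum().backward()
    # Detach carried-over state so the next backward cannot walk back into
    # the previous batch's (differently shaped, already freed) graph.
    state = tuple(h.detach() for h in state)
```

It may also be worth checking that the `grad_output` passed to `outputs.backward(grad_output)` in `EncoderDecoder.py` is computed from the same forward pass as `outputs`, since a gradient shaped for a different batch would fail in exactly this way.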
### Run time Log

```
python a2c-train.py -data dataset/train/processed_all.train.pt -save_dir dataset//result/ -embedding_w2v dataset/train/ -start_reinforce 10 -end_epoch 30 -critic_pretrain_epochs 10 -data_type hybrid -has_attn 1 -gpus 0
Start...
```