Open tanvidadu opened 5 years ago
I encountered the same issue. @DoodleJZ @tanvidadu Has anyone fixed this problem?
The dimensions may not match between d_model and the sum of d_tag, d_word, and d_char if you concatenate all the embeddings. You can check the dimension of each embedding part to find the problem easily.
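For example, a quick sanity check along those lines might look like the sketch below. The concrete widths here are made-up example values, not the parser's actual configuration:

```python
# Hypothetical sanity check: when the word, tag, and char embeddings are
# concatenated, their widths must sum to the model dimension d_model.
d_model = 1024                           # transformer input width (example)
d_word, d_tag, d_char = 512, 256, 256    # example embedding widths

total = d_word + d_tag + d_char
assert total == d_model, (
    f"embedding widths sum to {total}, but d_model is {d_model}"
)
print("embedding dimensions are consistent")
```

If the assertion fires, one of the embedding tables was built with a width that does not match the config.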
@DoodleJZ, I do not use the d_tag and d_char embeddings~ I run this code with Python 3.6 and PyTorch 0.4.0, but I get the following:
```
Loading model from models/cwt.pt...
loading embedding: glove from data/glove.gz
oov: 18820
Reading dependency parsing data from data/ptb_test_3.3.0.sd
Loading test trees from data/23.auto.clean...
Loaded 2,416 test examples.
Parsing test sentences...
packed_len: 2501
sentences: 100
torch.Size([2501])
self.batch_size 100
self.max_len: 50
residual: torch.Size([2501, 1024])
v_padded: torch.Size([800, 50, 64])
outputs_padded: torch.Size([800, 50, 64])
outputs = outputs_padded[output_mask]: torch.Size([40000, 64])
d_v1: 32
outputs = self.combine_v(outputs): torch.Size([5000, 1024])
outputs = self.residual_dropout(outputs,batch_idxs): torch.Size([5000, 1024])
Traceback (most recent call last):
  File "src_joint/main.py", line 746, in
```
Maybe you need to try PyTorch >= 1.0.0; the error is in output_mask, which occurs when the version of PyTorch does not match.
@DoodleJZ Sorry to bother you; I'm not that familiar with PyTorch. I just changed `torch_t.ByteTensor` to `torch_t.BoolTensor` as follows, and now everything works perfectly~ In `pad_and_rearrange(....)`: `invalid_mask = torch_t.BoolTensor(mb_size, len_padded).fill_(True)` (note the in-place `fill_`).
it works perfectly!
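For anyone else hitting this: since PyTorch 1.2, indexing a tensor with a uint8 (ByteTensor) mask is deprecated in favor of bool masks, which is why switching the dtype resolves it. A minimal sketch of the pattern (the shapes and the valid/invalid positions below are illustrative assumptions, not the parser's actual values):

```python
import torch

mb_size, len_padded = 2, 5

# Old style (deprecated for indexing since PyTorch 1.2):
# invalid_mask = torch.ByteTensor(mb_size, len_padded).fill_(True)

# New style: a bool mask. Note fill_ with the trailing underscore,
# the in-place variant; a method named _fill does not exist.
invalid_mask = torch.BoolTensor(mb_size, len_padded).fill_(True)
invalid_mask[0, :3] = False   # pretend the first 3 positions are valid

x = torch.zeros(mb_size, len_padded)
valid = x[~invalid_mask]      # boolean indexing keeps only valid positions
print(valid.shape)            # torch.Size([3])
```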
Hi @wujsAct, @CoyoteLeo, @tanvidadu. I am afraid that I have a similar problem:
I have two tensors that I want to add: one is a noise tensor of shape N x 1 x 64 x 64, and the other tensor has the same shape. Things work fine until the very last batch, it seems, when the program stops and complains with the following error message: `RuntimeError: The size of tensor a (96) must match the size of tensor b (128) at non-singleton dimension 0`.
Now, here is a bit of code:
```python
for batch_idx, (real_images, targets) in enumerate(train_loader):
    noise_disc = -torch.rand(size=(batch_size, 1, 64, 64)) / 5
    noise_disc = noise_disc.to(device)
    real_images = real_images.to(device)  # shape: (batch_size, 1, 64, 64)
    images_disc = (real_images + noise_disc)
    # move to device:
    images_disc = images_disc.to(device)
```
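In case it helps: the usual cause of this exact error is that the last batch a DataLoader yields is smaller than `batch_size` (here 96 examples instead of 128), while the noise tensor is always built with the full `batch_size`. One fix is to size the noise from the actual batch rather than the constant. A minimal sketch (the variable names mirror the snippet above; the concrete sizes are assumptions for illustration):

```python
import torch

batch_size = 128
# Simulate the final, smaller batch that a DataLoader yields when the
# dataset size is not divisible by batch_size.
real_images = torch.zeros(96, 1, 64, 64)

# Build the noise from the batch's real shape so the sizes always match,
# even on the last, partial batch.
noise_disc = -torch.rand(size=real_images.shape) / 5
images_disc = real_images + noise_disc   # no size mismatch

print(images_disc.shape)   # torch.Size([96, 1, 64, 64])
```

Alternatively, constructing the DataLoader with `drop_last=True` discards the partial final batch, so every batch really has `batch_size` examples.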
Unfortunately, I don't really understand why this error occurs, and I would appreciate help a lot!
Merry Christmas. :-)
Hey @DoodleJZ, I came across this error while running your parser. Could you please look into this and fix it?
```
Traceback (most recent call last):
  File "/content/drive/My Drive/Hd/DependencyParser/src_joint/main.py", line 746, in <module>
    main()
  File "/content/drive/My Drive/Hd/DependencyParser/src_joint/main.py", line 742, in main
    args.callback(args)
  File "/content/drive/My Drive/Hd/DependencyParser/src_joint/main.py", line 672, in run_parse
    syntree, _ = parser.parse_batch(subbatch_sentences)
  File "/content/drive/My Drive/Hd/DependencyParser/src_joint/Zparser.py", line 1364, in parse_batch
    extra_content_annotations=extra_content_annotations)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__
    result = self.forward(*input, **kwargs)
  File "/content/drive/My Drive/Hd/DependencyParser/src_joint/Zparser.py", line 822, in forward
    res, current_attns = attn(res, batch_idxs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__
    result = self.forward(*input, **kwargs)
  File "/content/drive/My Drive/Hd/DependencyParser/src_joint/Zparser.py", line 344, in forward
    return self.layer_norm(outputs + residual), attns_padded
RuntimeError: The size of tensor a (1100) must match the size of tensor b (695) at non-singleton dimension 0
```