Open ghost opened 6 years ago
Hi,
happy holiday :-)
It should be `--word_dim 200` instead of `--embedding_dim 200`.
Best,
Lucas
Oh, sorry, I didn't notice that you are using train_w instead of train_wc. Actually, for train_w, I'm not quite sure what causes the problem. I'll look into it later (if I find some time).
Hi, Thank you,
Happy holiday :-)
Sahar
I also get the same error, "dimension specified as 0 but tensor has no dimensions", using torch-0.3.0. The author used torch-0.2.0, as stated in the requirements, so I guess this is the problem. I haven't tried torch-0.2.0 yet.
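For anyone unsure which version they actually have installed, a quick sanity check (plain Python, nothing project-specific):

```python
import torch

# Print the installed PyTorch version; the error above is reported
# on 0.3.x, while the repo's requirements pin 0.2.0.
print(torch.__version__)
```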
@saharghannay and @jerryitp, were you able to resolve the error?
no, not yet
Also got the same issue here; no idea what caused it.
But when I tried installing a previous version of PyTorch (0.2.0.post3), the error went away.
On PyTorch 0.3.1.post2, plain tensors work fine:

```python
a = torch.FloatTensor()
b = torch.rand(2, 3)
a = torch.cat([a.clone(), b])
```

but with Variables it raises the error:

```python
a = torch.autograd.Variable(torch.FloatTensor(), requires_grad=True)
b = torch.autograd.Variable(torch.rand(2, 3), requires_grad=True)
a = torch.cat([a.clone(), b])
```

```
RuntimeError: dimension specified as 0 but tensor has no dimensions
```
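If pinning torch 0.2.0 isn't an option, one possible workaround sketch (the helper name `safe_cat` is mine, not from the repo) is to drop zero-element tensors before calling `torch.cat`, so the concatenation never sees an empty tensor:

```python
import torch

def safe_cat(tensors, dim=0):
    # Filter out zero-element tensors, the case that triggers
    # "dimension specified as 0 but tensor has no dimensions" on 0.3.x.
    nonempty = [t for t in tensors if t.numel() > 0]
    if not nonempty:
        # Nothing to concatenate: return an empty tensor as-is.
        return torch.FloatTensor()
    return torch.cat(nonempty, dim)

a = torch.FloatTensor()      # empty accumulator, as in the repro above
b = torch.rand(2, 3)
out = safe_cat([a.clone(), b])
print(out.shape)             # torch.Size([2, 3])
```

I haven't verified this against the repo's training loop; it only sidesteps the empty-tensor case shown in the repro.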
Hi, I was trying to run train_w.py on the NER CoNLL-2003 data using this command:

```
python train_w.py --train_file ner-conll2003/train --dev_file ner-conll2003/dev --testfile ner-conll2003/test --checkpoint ./checkpoint/ner --caseless --fine_tune --emb_file Glove5g_200.txt --embedding_dim 200 --gpu 1
```

But I got this error:
```
Traceback (most recent call last):
  File "train_w.py", line 189, in <module>
    loss.backward()
  File "../py3/lib/python3.6/site-packages/torch/autograd/variable.py", line 167, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
  File ".../py3/lib/python3.6/site-packages/torch/autograd/__init__.py", line 99, in backward
    variables, grad_variables, retain_graph)
RuntimeError: dimension specified as 0 but tensor has no dimensions
```
Do you have any idea how I can fix this problem, please? Thank you.