Sequence Generation Model for Multi-label Classification (COLING 2018)
RuntimeError: While copying the parameter named decoder.embedding.weight, whose dimensions in the model are torch.Size([58, 256]) and whose dimensions in the checkpoint are torch.Size([107, 256]). #3
Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 482, in load_state_dict
    own_state[name].copy_(param)
RuntimeError: invalid argument 2: sizes do not match at /pytorch/torch/lib/THC/generic/THCTensorCopy.c:101

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "predict.py", line 91, in <module>
    model.load_state_dict(checkpoints['model'])
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 487, in load_state_dict
    .format(name, own_state[name].size(), param.size()))
RuntimeError: While copying the parameter named decoder.embedding.weight, whose dimensions in the model are torch.Size([58, 256]) and whose dimensions in the checkpoint are torch.Size([107, 256]).
mldl@ub1604:~/ub16_prj/SGM$
This happens because the number of labels in your data does not match the size of the target vocabulary stored in the checkpoint (here, the model expects 58 labels while the checkpoint was trained with 107). Carefully check that the label vocabulary used for prediction is the same one that was used to train the checkpoint.
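Before rebuilding the vocabulary, it can help to confirm how many labels the checkpoint was actually trained with. A minimal sketch (the helper name and the simulated state dict are illustrative; in practice the dict would come from `torch.load(...)['model']`):

```python
import torch

def checkpoint_label_size(state_dict, key="decoder.embedding.weight"):
    """Return the number of label-vocabulary rows stored for `key`
    in a checkpoint's state dict."""
    return state_dict[key].shape[0]

# Simulated state dict standing in for checkpoints['model'],
# matching the shapes reported in the error above.
fake_state = {"decoder.embedding.weight": torch.zeros(107, 256)}
print(checkpoint_label_size(fake_state))  # 107
```

If this number differs from the label count produced by your preprocessing, the two were built from different label sets.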
mldl@ub1604:~/ub16_prj/SGM$ python3 predict.py -gpus 0 -log log_name
loading checkpoint...
loading data...
loading time cost: 3.669
building model...
Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 482, in load_state_dict
    own_state[name].copy_(param)
RuntimeError: invalid argument 2: sizes do not match at /pytorch/torch/lib/THC/generic/THCTensorCopy.c:101

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "predict.py", line 91, in <module>
    model.load_state_dict(checkpoints['model'])
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 487, in load_state_dict
    .format(name, own_state[name].size(), param.size()))
RuntimeError: While copying the parameter named decoder.embedding.weight, whose dimensions in the model are torch.Size([58, 256]) and whose dimensions in the checkpoint are torch.Size([107, 256]).
mldl@ub1604:~/ub16_prj/SGM$
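The clean fix is to regenerate the data so the label vocabulary matches the checkpoint. If you only want to get past the error for experimentation, one common workaround is to load just the parameters whose shapes match and leave the mismatched ones (here the decoder embedding) at their freshly initialised values. This is a hedged sketch, not part of the SGM codebase; `load_matching` is a hypothetical helper, shown on a toy module mirroring the 58-vs-107 mismatch:

```python
import torch
import torch.nn as nn

def load_matching(model, ckpt_state):
    """Copy only the checkpoint tensors whose names AND shapes match
    the current model; return the names that were skipped. Skipped
    parameters keep their current (e.g. randomly initialised) values."""
    own = model.state_dict()
    matched = {k: v for k, v in ckpt_state.items()
               if k in own and v.shape == own[k].shape}
    # strict=False so the deliberately omitted keys do not raise.
    model.load_state_dict(matched, strict=False)
    return sorted(set(ckpt_state) - set(matched))

# Toy example mirroring the error: model expects 58x256,
# checkpoint holds 107x256, so "weight" is skipped.
model = nn.Embedding(58, 256)
ckpt = {"weight": torch.zeros(107, 256)}
skipped = load_matching(model, ckpt)
print(skipped)  # ['weight']
```

Note that a skipped embedding is untrained, so predictions for those labels will be meaningless; this only avoids the crash while you rebuild a consistent vocabulary.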