monajalal opened this issue 6 years ago
Note that the error below occurs only when I pass the -which-model contextualized parameter:
[jalal@goku AttentionTargetSentiment]$ python main.py -which-model contextualized
Loading data...
Parameters:
ATTENTION_SIZE=150
BATCH_SIZE=16
CLIP_NORM=None
CUDA=False
DEVICE=-1
DROPOUT_EMBED=0.2
DROPOUT_RNN=0.4
EMBED_DIM=200
EMBED_NUM=23095
EPOCHS=30
GETF1=False
GRAYSCALE=None
HIDDEN_SIZE=150
IF_RE=False
LABEL_NUM=3
LOG_INTERVAL=1
LR=0.01
LR_SCHEDULER=None
MAX_NORM=None
MESSAGE=tt
NEED_SMALLEMBED=False
SAVE_DIR=snapshot/tt
SAVE_INTERVAL=100
SHUFFLE=True
SNAPSHOT=None
TEST=False
TEST_INTERVAL=100
USE_EMBEDDING=True
WEIGHT_DECAY=1e-06
WHICH_DATA=Z
WHICH_EMBEDDING=200d
WHICH_INIT=xavier
WHICH_MODEL=contextualized
WHICH_OPTIM=Adagrad
Iteration 1
Traceback (most recent call last):
File "main.py", line 121, in <module>
train.train(args, m_model, train_data.iterator, test_data.iterator)
File "/scratch2/debate_tweets/sentiment/AttentionTargetSentiment/train.py", line 56, in train
logit = model(feature, batch.target_start, batch.target_end)
File "/scratch/sjn/anaconda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 357, in __call__
result = self.forward(*input, **kwargs)
File "/scratch2/debate_tweets/sentiment/AttentionTargetSentiment/model/contextualized.py", line 90, in forward
s = self.attention(x, average_target)
File "/scratch/sjn/anaconda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 357, in __call__
result = self.forward(*input, **kwargs)
File "/scratch2/debate_tweets/sentiment/AttentionTargetSentiment/model/attention.py", line 22, in forward
m_combine = F.tanh(self.linear(m_combine))
File "/scratch/sjn/anaconda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 357, in __call__
result = self.forward(*input, **kwargs)
File "/scratch/sjn/anaconda/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 55, in forward
return F.linear(input, self.weight, self.bias)
File "/scratch/sjn/anaconda/lib/python3.6/site-packages/torch/nn/functional.py", line 837, in linear
output = input.matmul(weight.t())
File "/scratch/sjn/anaconda/lib/python3.6/site-packages/torch/autograd/variable.py", line 386, in matmul
return torch.matmul(self, other)
File "/scratch/sjn/anaconda/lib/python3.6/site-packages/torch/functional.py", line 192, in matmul
output = torch.mm(tensor1, tensor2)
RuntimeError: size mismatch, m1: [1504 x 300], m2: [600 x 150] at /pytorch/torch/lib/TH/generic/THTensorMath.c:1434
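For what it's worth, the numbers in the error line up with the parameters above: with HIDDEN_SIZE=150 and a bidirectional encoder, the attention layer's linear projection expects 4 * 150 = 600 input features (m2 is [600 x 150]), but the tensor reaching it only has 2 * 150 = 300 (m1 is [1504 x 300], i.e. 94 * 16 flattened positions). Below is a minimal sketch of shapes that would make the projection go through; concatenating x with average_target along the feature dimension is my assumption about what attention.py intends:

```python
import torch
import torch.nn as nn

hidden_size = 150                 # HIDDEN_SIZE from the run above
seq_len, batch = 94, 16           # 94 * 16 = 1504, matching m1's first dimension

# Bidirectional RNN output and the averaged target representation,
# each with 2 * hidden_size = 300 features (shapes inferred from the error):
x = torch.randn(seq_len, batch, 2 * hidden_size)
average_target = torch.randn(seq_len, batch, 2 * hidden_size)

# The projection in attention.py expects 600 input features (m2: [600 x 150]):
linear = nn.Linear(4 * hidden_size, hidden_size)

# Passing x alone (300 features) reproduces the size mismatch;
# concatenating along the feature dimension yields the expected 600:
m_combine = torch.cat([x, average_target], dim=2)   # (94, 16, 600)
s = torch.tanh(linear(m_combine))                   # (94, 16, 150), no mismatch
```

So it looks like either the concatenation with average_target is skipped on the path that fails, or it happens along the wrong dimension.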
[jalal@goku AttentionTargetSentiment]$ python main.py
Loading data...
Parameters:
ATTENTION_SIZE=150
BATCH_SIZE=16
CLIP_NORM=None
CUDA=False
DEVICE=-1
DROPOUT_EMBED=0.2
DROPOUT_RNN=0.4
EMBED_DIM=200
EMBED_NUM=23095
EPOCHS=30
GETF1=False
GRAYSCALE=None
HIDDEN_SIZE=150
IF_RE=False
LABEL_NUM=3
LOG_INTERVAL=1
LR=0.01
LR_SCHEDULER=None
MAX_NORM=None
MESSAGE=tt
NEED_SMALLEMBED=False
SAVE_DIR=snapshot/tt
SAVE_INTERVAL=100
SHUFFLE=True
SNAPSHOT=None
TEST=False
TEST_INTERVAL=100
USE_EMBEDDING=True
WEIGHT_DECAY=1e-06
WHICH_DATA=Z
WHICH_EMBEDDING=200d
WHICH_INIT=xavier
WHICH_MODEL=vanilla
WHICH_OPTIM=Adagrad
Iteration 1
/scratch2/debate_tweets/sentiment/AttentionTargetSentiment/model/vanilla.py:82: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
alfa = F.softmax(beta)
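As an aside, that UserWarning is silenced by passing an explicit dim to F.softmax; which dimension is correct depends on the shape of beta in vanilla.py, so dim=1 below is an assumption:

```python
import torch
import torch.nn.functional as F

beta = torch.randn(16, 94)       # (batch, seq_len) attention scores; shape assumed
alfa = F.softmax(beta, dim=1)    # explicit dim over the sequence avoids the warning
```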
Without passing contextualized as the which-model parameter it works fine, but how can I use -which-model contextualized without getting the error above?
I have also tried with the parameter BATCH_SIZE=1. Do you know how to fix the above error?