The collate fn uses self._elmo to check whether ELMo is enabled, but the cached Processor's _elmo is set to False (the cache was saved when processing SA+Elmo).
The error log is as follows:
$ python main.py baseline --cuda --mode embed_question --iteration 501 --test_path $SQUAD_DEV_QUESTION_PATH --elmo --num_heads 2 --batch_size 32 --cache
...
'train_path': '/home/jinhyuk/data/squad/train-v1.1.json',
'train_steps': 0,
'word_vocab_size': 10000}
Model loaded from /tmp/piqa/squad/save/501/model.pt
Saving embeddings
Traceback (most recent call last):
File "main.py", line 277, in <module>
main()
File "main.py", line 258, in main
embed(args)
File "main.py", line 240, in embed
question_output = model.get_question(**test_batch)
File "/home/jinhyuk/github/piqa/squad/baseline/model.py", line 285, in get_question
q = self.question_embedding(question_char_idxs, question_glove_idxs, question_word_idxs, ex=question_elmo_idxs)
File "/home/jinhyuk/anaconda3/envs/p3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/home/jinhyuk/github/piqa/squad/baseline/model.py", line 98, in forward
elmo, = self.elmo(ex)['elmo_representations']
File "/home/jinhyuk/anaconda3/envs/p3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/home/jinhyuk/anaconda3/envs/p3.6/lib/python3.6/site-packages/allennlp/modules/elmo.py", line 133, in forward
original_shape = inputs.size()
AttributeError: 'NoneType' object has no attribute 'size'
https://github.com/uwnlp/piqa/blob/3a3404d82bf61a07241035eaf64be10233e266dd/squad/baseline/processor.py#L215-L237
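For reference, here is a minimal sketch of the failure mode and a possible workaround. The Processor class and load_processor helper below are illustrative stand-ins, not the actual piqa code: the point is that a cached Processor carries the _elmo flag from the run that built the cache, so the collate fn emits None for question_elmo_idxs and self.elmo(ex) ends up being called with None. Re-syncing the flag with the current --elmo argument after loading the cache should avoid this:

```python
import pickle


class Processor:
    """Illustrative stand-in for squad/baseline/processor.py (not the real class)."""

    def __init__(self, elmo=False):
        self._elmo = elmo

    def collate(self, examples):
        # Mirrors the reported behavior: ELMo indices are only built when
        # self._elmo is True; otherwise they stay None and are later passed
        # to self.elmo(ex), which raises AttributeError on None.
        batch = {'question_glove_idxs': [ex['glove'] for ex in examples]}
        batch['question_elmo_idxs'] = (
            [ex['elmo'] for ex in examples] if self._elmo else None
        )
        return batch


def load_processor(cache_path, use_elmo):
    """Hypothetical helper: restore a cached Processor and re-sync its flags."""
    with open(cache_path, 'rb') as f:
        processor = pickle.load(f)
    # Possible workaround: the cached flag reflects the run that built the
    # cache, so overwrite it with the current --elmo setting.
    processor._elmo = use_elmo
    return processor
```

Alternatively, the _elmo flag could simply be excluded from what gets cached, so it is always taken from the current run's arguments.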