zhoujs93 closed this issue 4 years ago.
@putsandcalls, could you please include the full stack trace of that error? I can't tell where it's coming from for certain with just the exception text. Thanks.
It would also be useful to have a sample input that causes the failure.
@brendan-ai2
I have attached the sample input along with the JSON configuration; the error log is as follows:
File "c:\users\xbbns8w\appdata\local\continuum\anaconda3\envs\python36\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "c:\users\xbbns8w\appdata\local\continuum\anaconda3\envs\python36\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\xbbns8w\AppData\Local\Continuum\anaconda3\envs\python36\Scripts\allennlp.exe\__main__.py", line 9, in <module>
File "c:\users\xbbns8w\appdata\local\continuum\anaconda3\envs\python36\lib\site-packages\allennlp\run.py", line 18, in run
main(prog="allennlp")
File "c:\users\xbbns8w\appdata\local\continuum\anaconda3\envs\python36\lib\site-packages\allennlp\commands\__init__.py", line 102, in main
args.func(args)
File "c:\users\xbbns8w\appdata\local\continuum\anaconda3\envs\python36\lib\site-packages\allennlp\commands\train.py", line 124, in train_model_from_args
args.cache_prefix)
File "c:\users\xbbns8w\appdata\local\continuum\anaconda3\envs\python36\lib\site-packages\allennlp\commands\train.py", line 168, in train_model_from_file
cache_directory, cache_prefix)
File "c:\users\xbbns8w\appdata\local\continuum\anaconda3\envs\python36\lib\site-packages\allennlp\commands\train.py", line 252, in train_model
metrics = trainer.train()
File "c:\users\xbbns8w\appdata\local\continuum\anaconda3\envs\python36\lib\site-packages\allennlp\training\trainer.py", line 478, in train
train_metrics = self._train_epoch(epoch)
File "c:\users\xbbns8w\appdata\local\continuum\anaconda3\envs\python36\lib\site-packages\allennlp\training\trainer.py", line 320, in _train_epoch
loss = self.batch_loss(batch_group, for_training=True)
File "c:\users\xbbns8w\appdata\local\continuum\anaconda3\envs\python36\lib\site-packages\allennlp\training\trainer.py", line 261, in batch_loss
output_dict = self.model(**batch)
File "c:\users\xbbns8w\appdata\local\continuum\anaconda3\envs\python36\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "c:\users\xbbns8w\appdata\local\continuum\anaconda3\envs\python36\lib\site-packages\allennlp\models\basic_classifier.py", line 114, in forward
embedded_text = self._seq2seq_encoder(embedded_text, mask=mask)
File "c:\users\xbbns8w\appdata\local\continuum\anaconda3\envs\python36\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "c:\users\xbbns8w\appdata\local\continuum\anaconda3\envs\python36\lib\site-packages\allennlp\modules\seq2seq_encoders\pytorch_seq2seq_wrapper.py", line 83, in forward
self.sort_and_run_forward(self._module, inputs, mask, hidden_state)
File "c:\users\xbbns8w\appdata\local\continuum\anaconda3\envs\python36\lib\site-packages\allennlp\modules\encoder_base.py", line 116, in sort_and_run_forward
module_output, final_states = module(packed_sequence_input, initial_states)
File "c:\users\xbbns8w\appdata\local\continuum\anaconda3\envs\python36\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "c:\users\xbbns8w\appdata\local\continuum\anaconda3\envs\python36\lib\site-packages\torch\nn\modules\rnn.py", line 557, in forward
return self.forward_packed(input, hx)
File "c:\users\xbbns8w\appdata\local\continuum\anaconda3\envs\python36\lib\site-packages\torch\nn\modules\rnn.py", line 550, in forward_packed
output, hidden = self.forward_impl(input, hx, batch_sizes, max_batch_size, sorted_indices)
File "c:\users\xbbns8w\appdata\local\continuum\anaconda3\envs\python36\lib\site-packages\torch\nn\modules\rnn.py", line 519, in forward_impl
self.check_forward_args(input, hx, batch_sizes)
File "c:\users\xbbns8w\appdata\local\continuum\anaconda3\envs\python36\lib\site-packages\torch\nn\modules\rnn.py", line 490, in check_forward_args
self.check_input(input, batch_sizes)
File "c:\users\xbbns8w\appdata\local\continuum\anaconda3\envs\python36\lib\site-packages\torch\nn\modules\rnn.py", line 149, in check_input
expected_input_dim, input.dim()))
RuntimeError: input must have 2 dimensions, got 3
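The check that fails here is PyTorch's packed-sequence input check. My guess is that with segment_sentences = True the embedded text carries an extra sentence dimension, so the tensor that gets packed is 4-D and the packed data handed to the LSTM is 3-D instead of 2-D. A minimal plain-PyTorch sketch (shapes are illustrative, not taken from my data) that reproduces the same error message:

import torch
from torch import nn
from torch.nn.utils.rnn import pack_padded_sequence

# Illustrative shapes: with sentence segmentation the embedded text looks like
# (batch, num_sentences, num_tokens, embedding_dim) rather than the
# (batch, num_tokens, embedding_dim) that a Seq2SeqEncoder's RNN expects.
batch, num_sentences, num_tokens, embedding_dim = 2, 3, 5, 8
embedded = torch.randn(batch, num_sentences, num_tokens, embedding_dim)

# AllenNLP's PytorchSeq2SeqWrapper packs its input before calling the RNN.
# Packing a 4-D tensor yields 3-D packed data, but the LSTM expects 2-D packed data.
lengths = torch.tensor([num_sentences, num_sentences])
packed = pack_padded_sequence(embedded, lengths, batch_first=True)

lstm = nn.LSTM(input_size=embedding_dim, hidden_size=4, batch_first=True)
lstm(packed)  # RuntimeError: input must have 2 dimensions, got 3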
Closing due to inactivity
OS: Windows
Python version: 3.6.5
AllenNLP version: 0.8.5
PyTorch version: 1.1.0
Question:
I get the following error when running TextClassificationJsonReader with segment_sentences = True.
RuntimeError: input must have 2 dimensions, got 3
The configuration for my model is as follows:
I know that the dataset_reader is correct based on a test, and there is no issue with my data, since it is similar to what is used in allennlp/tests/data/dataset_readers/text_classification_json_test.py.
So I am not sure what I should do. Am I specifying the text_field_embedder correctly?
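For reference, each line of my input file is a JSON object with "text" and "label" keys, the same format used by that test fixture. A hypothetical example line, built in Python (the text and label values here are made up):

import json

# One JSON object per line, with "text" and "label" keys, is the format
# TextClassificationJsonReader reads. This particular example is invented.
example = {"text": "First sentence of the document. Second sentence.", "label": "positive"}
print(json.dumps(example))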
My goal is to build a model that performs document classification from sentence embeddings, since my documents are quite long (similar to the hierarchical-attention-networks model referenced in the dataset_readers/text_classification_json documentation: https://www.cs.cmu.edu/~hovy/papers/16HLT-hierarchical-attention-networks.pdf).
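For context, here is a rough plain-PyTorch sketch of the hierarchical shape flow I am aiming for: encode each sentence's tokens into a sentence vector, then encode the sentence vectors into a document vector. All names and sizes are illustrative, not my actual configuration, and attention is omitted for brevity.

import torch
from torch import nn

class HierarchicalClassifier(nn.Module):
    # Illustrative HAN-style classifier: a sentence-level RNN followed by a
    # document-level RNN, then a linear classification layer.
    def __init__(self, vocab_size, embed_dim, hidden_dim, num_labels):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.sentence_encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.document_encoder = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_labels)

    def forward(self, token_ids):
        # token_ids: (batch, num_sentences, num_tokens), the shape produced
        # when the reader segments sentences.
        batch, num_sent, num_tok = token_ids.shape
        embedded = self.embedding(token_ids)                                 # (B, S, T, E)
        # Fold the sentence dimension into the batch so each RNN sees 3-D input.
        flat = embedded.view(batch * num_sent, num_tok, -1)                  # (B*S, T, E)
        _, sent_hidden = self.sentence_encoder(flat)                         # (1, B*S, H)
        sentence_vectors = sent_hidden.squeeze(0).view(batch, num_sent, -1)  # (B, S, H)
        _, doc_hidden = self.document_encoder(sentence_vectors)              # (1, B, H)
        return self.classifier(doc_hidden.squeeze(0))                        # (B, num_labels)

model = HierarchicalClassifier(vocab_size=1000, embed_dim=16, hidden_dim=8, num_labels=2)
logits = model(torch.randint(1, 1000, (2, 3, 5)))  # logits has shape (2, 2)

In AllenNLP terms this would mean running a sentence-level encoder over each sentence (for example, wrapped in TimeDistributed) before the document-level encoder; as far as I can tell, the stock basic_classifier does not do that, which is why I am asking about the configuration.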
Thanks