ChunchuanLv / AMR_AS_GRAPH_PREDICTION

TypeError: 'int' object is not subscriptable for AMRProcessors when running pre-trained model #6

Closed: logan-siyao-peng closed this issue 5 years ago

logan-siyao-peng commented 5 years ago

Hi Chunchuan, I would like to parse some text documents using your state-of-the-art pretrained model. I am running on a Mac (without GPU support) using the following command:

python3 src/parse.py -train_from gpus_0valida_best.pt -text "I like dogs and cats." -gpus -1

However, there seems to be an error with the dimensions of root_id (line 676 of parser/AMRProcessors.py): root_id has the value tensor(0), so converting it with tolist() yields the integer 0 instead of a list.

I am not sure if this is related to deprecation warnings like:

UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.

Thank you for your help.

Traceback (most recent call last):
  File "src/parse.py", line 52, in <module>
    output = Parser.parse_batch([line.strip()])
  File "/Users/loganpeng/Dropbox/Fall_2018/LING672_Adv_semantic_interpret/final_project/data_parsers/AMR_AS_GRAPH_PREDICTION/parser/AMRProcessors.py", line 210, in parse_batch
    graphs,rel_triples = self.decoder.relProbAndConToGraph(concept_batches,rel_prob,roots,(dependent_mark_batch,aligns_raw),True,True)
  File "/Users/loganpeng/Dropbox/Fall_2018/LING672_Adv_semantic_interpret/final_project/data_parsers/AMR_AS_GRAPH_PREDICTION/parser/AMRProcessors.py", line 680, in relProbAndConToGraph
    root_id = root_id.data.tolist()[0]
TypeError: 'int' object is not subscriptable
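
For reference, here is a minimal standalone sketch (assuming a PyTorch >= 0.4 install, where 0-dim tensors convert to plain Python scalars) that reproduces the same TypeError outside the repo:

import torch

root_id = torch.tensor(0)      # stand-in for the root_id value reported above
vals = root_id.data.tolist()   # on a 0-dim tensor this returns a scalar, not a list
print(vals)                    # 0
vals[0]                        # TypeError: 'int' object is not subscriptable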

ChunchuanLv commented 5 years ago

Hi,

Probably you are not using PyTorch 0.2.
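
A quick sketch to double-check which version is actually installed in the environment (plain Python, nothing repo-specific):

import torch
print(torch.__version__)   # this repo reportedly targets 0.2.x; 0.4+ changed 0-dim tensor behavior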

Chunchuan

logan-siyao-peng commented 5 years ago

Thank you, Chunchuan. Problem solved.

TonalidadeHidrica commented 5 years ago

Hello,

I was also trying the pretrained model and encountered the same problem. This is roughly what I did:

$ git clone https://github.com/ChunchuanLv/AMR_AS_GRAPH_PREDICTION
$ cd AMR_AS_GRAPH_PREDICTION
$ cp downloaded/gpus_0valid_best.pt .
$ conda create -n with-some-name
$ conda activate with-some-name
$ conda install pytorch=0.2.0 -c pytorch
$ pip install some-missing-packages
$ conda list
# packages in environment at /path/to/my/envs/
#
# Name                    Version                   Build  Channel
# * snip *
pytorch                   0.2.0           py36cuda8.0cudnn6.0_0
# * snip *
$ python src/parse.py -train_from gpus_0valid_best.pt -text "This is a test."
Loading from checkpoint at gpus_0valid_best.pt
# * snip - some warnings *
Traceback (most recent call last):
  File "src/parse.py", line 101, in <module>
    output = Parser.parse_one(opt.text)
  File "/path/to/the/directory/parser/AMRProcessors.py", line 234, in parse_one
    return self.parse_batch([src_text])
  File "/path/to/the/directory/parser/AMRProcessors.py", line 215, in parse_batch
    graphs,rel_triples  =  self.decoder.relProbAndConToGraph(concept_batches,rel_prob,roots,(dependent_mark_batch,aligns_raw),True,True)
  File "/path/to/the/directory/parser/AMRProcessors.py", line 682, in relProbAndConToGraph
    root_id = root_id.data.tolist()[0]
TypeError: 'int' object is not subscriptable

Do you have any idea to solve this?

ChunchuanLv commented 5 years ago

Looks like a version problem, but it can probably be solved by removing the indexing.
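
For example, a version-tolerant variant of that line (a sketch only, not a tested patch) could look like this:

# parser/AMRProcessors.py, at the line shown in the tracebacks above:
#     root_id = root_id.data.tolist()[0]
# Under PyTorch >= 0.4, tolist() on a 0-dim tensor already returns a scalar,
# so the trailing [0] fails; handle both old and new behavior:
root_val = root_id.data.tolist()
root_id = root_val[0] if isinstance(root_val, list) else root_val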

TonalidadeHidrica commented 5 years ago

I confirmed above that the PyTorch version was 0.2.0, but in any case, removing the index avoided the error. However, when I try to parse a sentence, the parser always outputs an empty graph. This could be another bug independent of the original problem, or a consequence of the modification (removing the [0]); for now I'm posting the output here - let me know if I should open a separate issue.

$ python src/parse.py -train_from gpus_0valid_best.pt -text "This is a test."
/path/to/the/python/library/torch/serialization.py:425: SourceChangeWarning: source code of class 'torch.nn.modules.rnn.LSTM' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
/path/to/the/python/library/torch/serialization.py:425: SourceChangeWarning: source code of class 'torch.nn.modules.sparse.Embedding' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
/path/to/the/python/library/torch/serialization.py:425: SourceChangeWarning: source code of class 'torch.nn.modules.dropout.Dropout' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
/path/to/the/python/library/torch/serialization.py:425: SourceChangeWarning: source code of class 'torch.nn.modules.container.Sequential' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
/path/to/the/python/library/torch/serialization.py:425: SourceChangeWarning: source code of class 'torch.nn.modules.linear.Linear' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
/path/to/the/python/library/torch/serialization.py:425: SourceChangeWarning: source code of class 'torch.nn.modules.activation.Softmax' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
/path/to/the/python/library/torch/serialization.py:425: SourceChangeWarning: source code of class 'torch.nn.modules.activation.ReLU' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
/path/to/the/python/library/torch/serialization.py:425: SourceChangeWarning: source code of class 'torch.nn.modules.activation.LogSoftmax' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
/path/to/the/directory/parser/DataIterator.py:241: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
  return PackedSequence(Variable(packed[0], volatile=self.volatile,requires_grad = False),packed[1])
/path/to/the/directory/parser/models/ConceptModel.py:65: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
  outputs, hidden_t = self.rnn(emb, hidden)
/path/to/the/directory/parser/models/ConceptModel.py:117: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
  le_prob = self.sm(le_score)
/path/to/the/directory/parser/models/ConceptModel.py:118: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
  cat_prob = self.sm(cat_score)
/path/to/the/directory/parser/models/ConceptModel.py:119: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
  ner_prob = self.sm(ner_score)
/path/to/the/directory/parser/DataIterator.py:162: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
  out_index.append(Variable(index_t,volatile=self.volatile,requires_grad = False))
/path/to/the/directory/parser/DataIterator.py:165: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
  out = Variable(out,volatile=self.volatile,requires_grad = False)
/path/to/the/directory/parser/models/MultiPassRelModel.py:234: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
  Outputs = self.rnn(poster_emb, hidden)[0]
/path/to/the/directory/parser/models/MultiPassRelModel.py:76: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
  outputs = self.rnn(emb, hidden)[0]
/path/to/the/directory/parser/models/MultiPassRelModel.py:345: UserWarning: Implicit dimension choice for log_softmax has been deprecated. Change the call to include dim=X as an argument.
  output.append(self.LogSoftmax(score))
/path/to/the/directory/parser/models/MultiPassRelModel.py:402: UserWarning: Implicit dimension choice for log_softmax has been deprecated. Change the call to include dim=X as an argument.
  score = F.log_softmax(score.view(ls[-1]*ls[-1],self.n_rel)) # - score.exp().sum(2,keepdim=True).log().expand_as(score)
Loading from checkpoint at gpus_0valid_best.pt
from model in gpus:0  to gpu:0
Model loaded
AmrModel(
  (concept_decoder): ConceptIdentifier(
    (encoder): SentenceEncoder(
      (rnn): LSTM(548, 256, bidirectional=True)
      (lemma_lut): Embedding(32377, 200, padding_idx=0)
      (word_fix_lut): Embedding(35887, 300, padding_idx=0)
      (pos_lut): Embedding(46, 32, padding_idx=0)
      (ner_lut): Embedding(25, 16, padding_idx=0)
      (drop_emb): Dropout(p=0)
    )
    (generator): Concept_Classifier(
      (cat_score): Sequential(
        (0): Dropout(p=0)
        (1): Linear(in_features=512, out_features=32, bias=True)
      )
      (le_score): Sequential(
        (0): Dropout(p=0)
        (1): Linear(in_features=512, out_features=495, bias=False)
      )
      (ner_score): Sequential(
        (0): Dropout(p=0)
        (1): Linear(in_features=512, out_features=109, bias=True)
      )
      (sm): Softmax()
    )
  )
  (poserior_m): VariationalAlignmentModel(
    (posterior): Posterior(
      (transform): Sequential(
        (0): Dropout(p=0.2)
        (1): Linear(in_features=512, out_features=200, bias=False)
      )
      (sm): Softmax()
    )
    (amr_encoder): AmrEncoder(
      (rnn): LSTM(232, 100, dropout=0.2, bidirectional=True)
      (cat_lut): Embedding(32, 32, padding_idx=0)
      (lemma_lut): Embedding(32377, 200, padding_idx=0)
    )
    (snt_encoder): SentenceEncoder(
      (rnn): LSTM(548, 256, dropout=0.2, bidirectional=True)
      (lemma_lut): Embedding(32377, 200, padding_idx=0)
      (word_fix_lut): Embedding(35887, 300, padding_idx=0)
      (pos_lut): Embedding(46, 32, padding_idx=0)
      (ner_lut): Embedding(25, 16, padding_idx=0)
      (drop_emb): Dropout(p=0.2)
    )
  )
  (relModel): RelModel(
    (root_encoder): RootEncoder(
      (cat_lut): Embedding(32, 32, padding_idx=0)
      (lemma_lut): Embedding(32377, 200, padding_idx=0)
      (root): Sequential(
        (0): Dropout(p=0.2)
        (1): Linear(in_features=744, out_features=200, bias=True)
        (2): ReLU()
      )
    )
    (encoder): RelEncoder(
      (head): Sequential(
        (0): Dropout(p=0.2)
        (1): Linear(in_features=744, out_features=200, bias=True)
      )
      (dep): Sequential(
        (0): Dropout(p=0.2)
        (1): Linear(in_features=744, out_features=200, bias=True)
      )
      (cat_lut): Embedding(32, 32, padding_idx=0)
      (lemma_lut): Embedding(32377, 200, padding_idx=0)
    )
    (generator): RelCalssifierBiLinear(
      (cat_lut): Embedding(32, 32, padding_idx=0)
      (bilinear): Sequential(
        (0): Dropout(p=0.2)
        (1): Linear(in_features=200, out_features=18400, bias=True)
      )
      (head_bias): Sequential(
        (0): Dropout(p=0.2)
        (1): Linear(in_features=200, out_features=92, bias=True)
      )
      (dep_bias): Sequential(
        (0): Dropout(p=0.2)
        (1): Linear(in_features=200, out_features=92, bias=True)
      )
      (lemma_lut): Embedding(32377, 200, padding_idx=0)
    )
    (root): Linear(in_features=200, out_features=1, bias=True)
    (LogSoftmax): LogSoftmax()
  )
  (rel_encoder): RelSentenceEncoder(
    (rnn): LSTM(549, 256, num_layers=2, batch_first=True, dropout=0.2, bidirectional=True)
    (lemma_lut): Embedding(32377, 200, padding_idx=0)
    (word_fix_lut): Embedding(35887, 300, padding_idx=0)
    (pos_lut): Embedding(46, 32, padding_idx=0)
    (ner_lut): Embedding(25, 16, padding_idx=0)
  )
  (root_encoder): RootSentenceEncoder(
    (rnn): LSTM(548, 256, batch_first=True, dropout=0.2, bidirectional=True)
    (lemma_lut): Embedding(32377, 200, padding_idx=0)
    (word_fix_lut): Embedding(35887, 300, padding_idx=0)
    (pos_lut): Embedding(46, 32, padding_idx=0)
    (ner_lut): Embedding(25, 16, padding_idx=0)
  )
)
training parameters: 64
# ::snt This is a test.
# ::tok This is a test .
# ::lemma this be a test .
# ::pos DT VBZ DT NN .
# ::ner O O O O O
# ::node    a0  amr-empty   0-1
(a0 / amr-empty)