ChunchuanLv / AMR_AS_GRAPH_PREDICTION


Running with trained model outputs nothing #18

Open · TonalidadeHidrica opened 5 years ago

TonalidadeHidrica commented 5 years ago

As I mentioned in #6, I cannot obtain any AMR graph no matter what sentence I input. The following is what I did. I made sure that the PyTorch version was 0.2.0. Since the original code raised an error, I modified it slightly (see the diff below). How can I fix this?

$ git clone https://github.com/ChunchuanLv/AMR_AS_GRAPH_PREDICTION

$ cd AMR_AS_GRAPH_PREDICTION

$ cp downloaded/gpus_0valid_best.pt .

$ conda create -n with-some-name

$ conda activate with-some-name

$ conda install pytorch=0.2.0 -c pytorch

$ pip install some-missing-packages

$ conda list
# packages in environment at /path/to/my/envs/
#
# Name                    Version                   Build  Channel
# * snip *
pytorch                   0.2.0           py36cuda8.0cudnn6.0_0
# * snip *

$ # modify parser/AMRProcessors.py as follows
$ git diff parser/AMRProcessors.py
diff --git a/parser/AMRProcessors.py b/parser/AMRProcessors.py
index 575c6db..5ad2a3e 100644
--- a/parser/AMRProcessors.py
+++ b/parser/AMRProcessors.py
@@ -679,7 +679,7 @@ class AMRDecoder(object):
         for i,(role_scores,concepts,roots_score,dependent_mark,aligns) in enumerate(zip(score_batch,srl_batch,roots,depedent_mark_batch,aligns_batch)):
             root_s,root_id = roots_score.max(0)
             assert roots_score.size(0) == len(concepts),(concepts,roots_score)
-            root_id = root_id.data.tolist()[0]
+            root_id = root_id.data.tolist()#[0]
             assert root_id < len(concepts),(concepts,roots_score)

             g,quadruples = create_connected_graph(role_scores,concepts,root_id,dependent_mark,aligns)
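
For reference, this is roughly the failure I hit with the unmodified line (a minimal sketch using a made-up score tensor, not the repo's actual data):

import torch

roots_score = torch.rand(5)           # stand-in for the per-concept root scores
root_s, root_id = roots_score.max(0)  # root_id comes back as a 0-dim tensor on my install
print(root_id.data.tolist())          # a plain int, so the original [0] indexing raises TypeError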

$ python src/parse.py -train_from gpus_0valid_best.pt -text "This is a test."
/path/to/the/python/library/torch/serialization.py:425: SourceChangeWarning: source code of class 'torch.nn.modules.rnn.LSTM' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
/path/to/the/python/library/torch/serialization.py:425: SourceChangeWarning: source code of class 'torch.nn.modules.sparse.Embedding' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
/path/to/the/python/library/torch/serialization.py:425: SourceChangeWarning: source code of class 'torch.nn.modules.dropout.Dropout' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
/path/to/the/python/library/torch/serialization.py:425: SourceChangeWarning: source code of class 'torch.nn.modules.container.Sequential' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
/path/to/the/python/library/torch/serialization.py:425: SourceChangeWarning: source code of class 'torch.nn.modules.linear.Linear' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
/path/to/the/python/library/torch/serialization.py:425: SourceChangeWarning: source code of class 'torch.nn.modules.activation.Softmax' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
/path/to/the/python/library/torch/serialization.py:425: SourceChangeWarning: source code of class 'torch.nn.modules.activation.ReLU' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
/path/to/the/python/library/torch/serialization.py:425: SourceChangeWarning: source code of class 'torch.nn.modules.activation.LogSoftmax' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
/path/to/the/directory/parser/DataIterator.py:241: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
  return PackedSequence(Variable(packed[0], volatile=self.volatile,requires_grad = False),packed[1])
/path/to/the/directory/parser/models/ConceptModel.py:65: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
  outputs, hidden_t = self.rnn(emb, hidden)
/path/to/the/directory/parser/models/ConceptModel.py:117: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
  le_prob = self.sm(le_score)
/path/to/the/directory/parser/models/ConceptModel.py:118: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
  cat_prob = self.sm(cat_score)
/path/to/the/directory/parser/models/ConceptModel.py:119: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
  ner_prob = self.sm(ner_score)
/path/to/the/directory/parser/DataIterator.py:162: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
  out_index.append(Variable(index_t,volatile=self.volatile,requires_grad = False))
/path/to/the/directory/parser/DataIterator.py:165: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
  out = Variable(out,volatile=self.volatile,requires_grad = False)
/path/to/the/directory/parser/models/MultiPassRelModel.py:234: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
  Outputs = self.rnn(poster_emb, hidden)[0]
/path/to/the/directory/parser/models/MultiPassRelModel.py:76: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
  outputs = self.rnn(emb, hidden)[0]
/path/to/the/directory/parser/models/MultiPassRelModel.py:345: UserWarning: Implicit dimension choice for log_softmax has been deprecated. Change the call to include dim=X as an argument.
  output.append(self.LogSoftmax(score))
/path/to/the/directory/parser/models/MultiPassRelModel.py:402: UserWarning: Implicit dimension choice for log_softmax has been deprecated. Change the call to include dim=X as an argument.
  score = F.log_softmax(score.view(ls[-1]*ls[-1],self.n_rel)) # - score.exp().sum(2,keepdim=True).log().expand_as(score)
Loading from checkpoint at gpus_0valid_best.pt
from model in gpus:0  to gpu:0
Model loaded
AmrModel(
  (concept_decoder): ConceptIdentifier(
    (encoder): SentenceEncoder(
      (rnn): LSTM(548, 256, bidirectional=True)
      (lemma_lut): Embedding(32377, 200, padding_idx=0)
      (word_fix_lut): Embedding(35887, 300, padding_idx=0)
      (pos_lut): Embedding(46, 32, padding_idx=0)
      (ner_lut): Embedding(25, 16, padding_idx=0)
      (drop_emb): Dropout(p=0)
    )
    (generator): Concept_Classifier(
      (cat_score): Sequential(
        (0): Dropout(p=0)
        (1): Linear(in_features=512, out_features=32, bias=True)
      )
      (le_score): Sequential(
        (0): Dropout(p=0)
        (1): Linear(in_features=512, out_features=495, bias=False)
      )
      (ner_score): Sequential(
        (0): Dropout(p=0)
        (1): Linear(in_features=512, out_features=109, bias=True)
      )
      (sm): Softmax()
    )
  )
  (poserior_m): VariationalAlignmentModel(
    (posterior): Posterior(
      (transform): Sequential(
        (0): Dropout(p=0.2)
        (1): Linear(in_features=512, out_features=200, bias=False)
      )
      (sm): Softmax()
    )
    (amr_encoder): AmrEncoder(
      (rnn): LSTM(232, 100, dropout=0.2, bidirectional=True)
      (cat_lut): Embedding(32, 32, padding_idx=0)
      (lemma_lut): Embedding(32377, 200, padding_idx=0)
    )
    (snt_encoder): SentenceEncoder(
      (rnn): LSTM(548, 256, dropout=0.2, bidirectional=True)
      (lemma_lut): Embedding(32377, 200, padding_idx=0)
      (word_fix_lut): Embedding(35887, 300, padding_idx=0)
      (pos_lut): Embedding(46, 32, padding_idx=0)
      (ner_lut): Embedding(25, 16, padding_idx=0)
      (drop_emb): Dropout(p=0.2)
    )
  )
  (relModel): RelModel(
    (root_encoder): RootEncoder(
      (cat_lut): Embedding(32, 32, padding_idx=0)
      (lemma_lut): Embedding(32377, 200, padding_idx=0)
      (root): Sequential(
        (0): Dropout(p=0.2)
        (1): Linear(in_features=744, out_features=200, bias=True)
        (2): ReLU()
      )
    )
    (encoder): RelEncoder(
      (head): Sequential(
        (0): Dropout(p=0.2)
        (1): Linear(in_features=744, out_features=200, bias=True)
      )
      (dep): Sequential(
        (0): Dropout(p=0.2)
        (1): Linear(in_features=744, out_features=200, bias=True)
      )
      (cat_lut): Embedding(32, 32, padding_idx=0)
      (lemma_lut): Embedding(32377, 200, padding_idx=0)
    )
    (generator): RelCalssifierBiLinear(
      (cat_lut): Embedding(32, 32, padding_idx=0)
      (bilinear): Sequential(
        (0): Dropout(p=0.2)
        (1): Linear(in_features=200, out_features=18400, bias=True)
      )
      (head_bias): Sequential(
        (0): Dropout(p=0.2)
        (1): Linear(in_features=200, out_features=92, bias=True)
      )
      (dep_bias): Sequential(
        (0): Dropout(p=0.2)
        (1): Linear(in_features=200, out_features=92, bias=True)
      )
      (lemma_lut): Embedding(32377, 200, padding_idx=0)
    )
    (root): Linear(in_features=200, out_features=1, bias=True)
    (LogSoftmax): LogSoftmax()
  )
  (rel_encoder): RelSentenceEncoder(
    (rnn): LSTM(549, 256, num_layers=2, batch_first=True, dropout=0.2, bidirectional=True)
    (lemma_lut): Embedding(32377, 200, padding_idx=0)
    (word_fix_lut): Embedding(35887, 300, padding_idx=0)
    (pos_lut): Embedding(46, 32, padding_idx=0)
    (ner_lut): Embedding(25, 16, padding_idx=0)
  )
  (root_encoder): RootSentenceEncoder(
    (rnn): LSTM(548, 256, batch_first=True, dropout=0.2, bidirectional=True)
    (lemma_lut): Embedding(32377, 200, padding_idx=0)
    (word_fix_lut): Embedding(35887, 300, padding_idx=0)
    (pos_lut): Embedding(46, 32, padding_idx=0)
    (ner_lut): Embedding(25, 16, padding_idx=0)
  )
)
training parameters: 64
# ::snt This is a test.
# ::tok This is a test .
# ::lemma this be a test .
# ::pos DT VBZ DT NN .
# ::ner O O O O O
# ::node    a0  amr-empty   0-1
(a0 / amr-empty)
Adamits commented 4 years ago

@TonalidadeHidrica I would guess that you are not actually using PyTorch 0.2.0; I do not think those warnings should appear with that version. Are you sure that whichever python you are calling has access to the conda-installed packages?

Can you run python and check:

import torch
print(torch.__version__)
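
and, to make sure that python is really the one from the conda environment, something like the following (generic checks, nothing specific to this repo):

import sys
print(sys.executable)   # which python binary is actually running
import torch
print(torch.__file__)   # where the imported torch comes from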

I was getting a similar-looking result when using PyTorch 1.0.0. Of course, in my case I seem to be able to run PyTorch 0.2 only on the CPU, because of GPU compatibility issues with old CUDA versions.
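
In case it helps, this is roughly how I check whether an old build can see the GPU at all (a plain PyTorch call, nothing repo-specific):

import torch
print(torch.cuda.is_available())  # False for me: the old 0.2 build does not support my GPU/CUDA combination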