shayneobrien / coreference-resolution

Efficient and clean PyTorch reimplementation of "End-to-end Neural Coreference Resolution" (Lee et al., EMNLP 2017).
https://arxiv.org/pdf/1707.07045.pdf

list index out of range in pad_sequence of torch implementation. #10

Open rupimanoj opened 5 years ago

rupimanoj commented 5 years ago

During the evaluation stage on the development dataset, I am intermittently hitting the error below. Have you ever faced this issue, and how did you resolve it?

Traceback (most recent call last):
  File "coref.py", line 693, in <module>
    trainer.train(150)
  File "coref.py", line 459, in train
    self.train_epoch(epoch, *args, **kwargs)
  File "coref.py", line 490, in train_epoch
    corefs_found, total_corefs, corefs_chosen = self.train_doc(doc)
  File "coref.py", line 523, in train_doc
    spans, probs = self.model(document)
  File "/home/rupimanoj/anaconda3/envs/project/lib/python3.7/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "coref.py", line 424, in forward
    states, embeds = self.encoder(doc)
  File "/home/rupimanoj/anaconda3/envs/project/lib/python3.7/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "coref.py", line 206, in forward
    packed, reorder = pack(embeds)
  File "/home/rupimanoj/coref/coreference-resolution/src/utils.py", line 74, in pack
    packed = pack_sequence(sorted_tensors)
  File "/home/rupimanoj/anaconda3/envs/project/lib/python3.7/site-packages/torch/nn/utils/rnn.py", line 353, in pack_sequence
    return pack_padded_sequence(pad_sequence(sequences), [v.size(0) for v in sequences])
  File "/home/rupimanoj/anaconda3/envs/project/lib/python3.7/site-packages/torch/nn/utils/rnn.py", line 311, in pad_sequence
    max_size = sequences[0].size()
IndexError: list index out of range
txAnnie commented 5 years ago

Have you fixed this problem? I'm getting the same issue.

omkar13 commented 5 years ago

I am facing the same issue. I think the problem is that some documents are not parsed correctly, so their sents property is left as an empty list.
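The traceback is consistent with this diagnosis: pad_sequence indexes sequences[0] without checking for an empty list. A minimal pure-Python sketch of the failing step (the function below is a hypothetical stand-in that mirrors only the indexing on the line shown in torch/nn/utils/rnn.py, not the real implementation):

```python
def pad_sequence_stub(sequences):
    # Stand-in for the first line of torch's pad_sequence:
    #   max_size = sequences[0].size()
    # If a document has no parsed sentences, `sequences` is empty
    # and indexing element 0 raises IndexError.
    max_size = sequences[0]
    return max_size

# An unparsed document contributes no sentence tensors:
empty_doc_embeds = []
try:
    pad_sequence_stub(empty_doc_embeds)
except IndexError as err:
    print(err)  # list index out of range
```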

TobiCa commented 5 years ago

Same problem here. Just following.

liubifly commented 5 years ago

I got that too. I think it's because some embeddings are all zeros. Can we just skip those documents?

henryhust commented 5 years ago

It might be caused by an empty doc object. Just edit the code around line 480 in coref.py: add

self.train_corpus = [doc for doc in self.train_corpus if doc.sents]

before

# Randomly sample documents from the train corpus
batch = random.sample(self.train_corpus, self.steps)

The same idea also works in the evaluation process.
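The fix described above can be sketched in isolation as follows. This is a minimal stand-in, not the repository's code: the Doc class here is hypothetical, and only the sents attribute and the filtering list comprehension are taken from the comment above.

```python
import random

class Doc:
    """Hypothetical stand-in for a parsed document."""
    def __init__(self, sents):
        self.sents = sents  # list of sentences; empty if parsing failed

def filter_empty_docs(corpus):
    # Drop documents with no sentences so pack_sequence never
    # receives an empty list of tensors downstream.
    return [doc for doc in corpus if doc.sents]

# Usage: filter before sampling a training batch.
corpus = [Doc(["a sentence"]), Doc([]), Doc(["another", "one"])]
corpus = filter_empty_docs(corpus)
batch = random.sample(corpus, 2)  # safe: every doc has sentences
```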

lizhuoranget commented 4 years ago

I am facing a similar problem, but during the first evaluation. I added self.train_corpus = [doc for doc in self.train_corpus if doc.sents] and finished 10 epochs of training; then, at the first evaluation stage, I get the issue below. Have you ever faced this issue, and how did you resolve it?

EVALUATION

Evaluating on validation corpus...
31it [02:12,  1.26it/s]Traceback (most recent call last):
  File "coref.py", line 696, in <module>
    trainer.train(150)
  File "coref.py", line 467, in train
    results = self.evaluate(self.val_corpus)
  File "coref.py", line 572, in evaluate
    predicted_docs = [self.predict(doc) for doc in tqdm(val_corpus)]
  File "coref.py", line 572, in <listcomp>
    predicted_docs = [self.predict(doc) for doc in tqdm(val_corpus)]
  File "coref.py", line 601, in predict
    spans, probs = self.model(doc)
  File "/home/LAB/lizr/.conda/envs/lzrconda3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "coref.py", line 423, in forward
    states, embeds = self.encoder(doc)
  File "/home/LAB/lizr/.conda/envs/lzrconda3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "coref.py", line 205, in forward
    packed, reorder = pack(embeds)
  File "/home/LAB/lizr/coreference-resolution/src/utils.py", line 73, in pack
    packed = pack_sequence(sorted_tensors)
  File "/home/LAB/lizr/.conda/envs/lzrconda3.6/lib/python3.6/site-packages/torch/nn/utils/rnn.py", line 353, in pack_sequence
    return pack_padded_sequence(pad_sequence(sequences), [v.size(0) for v in sequences])
  File "/home/LAB/lizr/.conda/envs/lzrconda3.6/lib/python3.6/site-packages/torch/nn/utils/rnn.py", line 311, in pad_sequence
    max_size = sequences[0].size()
IndexError: list index out of range
lizhuoranget commented 4 years ago


Well, I like henryhust's method. Following it, I added this line in coref.py at line 467:

self.val_corpus.docs = [doc for doc in self.val_corpus if doc.sents]

before results = self.evaluate(self.val_corpus). Training and evaluation now finish, but my results are poor: Epoch: 150 | Loss: 2832.548317 | Mention recall: 0.067340 | Coref recall: 0.024316 | Coref precision: 0.020000. Did you get results like the paper's? And did you modify any other lines besides #12?
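The validation-side filter can be sketched like this. Everything here is a hypothetical stand-in except the docs/sents attribute names and the list comprehension from the comment above; the __iter__ method reflects that evaluate() iterates the corpus document by document, as the listcomp in the traceback suggests.

```python
class Doc:
    """Hypothetical stand-in for a parsed document."""
    def __init__(self, sents):
        self.sents = sents

class Corpus:
    """Hypothetical stand-in for the validation-corpus object."""
    def __init__(self, docs):
        self.docs = docs
    def __iter__(self):
        # evaluate() iterates over documents directly
        return iter(self.docs)

val_corpus = Corpus([Doc(["s1"]), Doc([]), Doc(["s2", "s3"])])
# Drop unparsed documents before evaluation, mirroring the
# train-corpus filter:
val_corpus.docs = [doc for doc in val_corpus if doc.sents]
```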

gaoya-J commented 3 years ago

I have the same problem. Have you solved it? Could you discuss it?