hengluchang / deep-news-summarization

News summarization using sequence to sequence model with attention in TensorFlow.
MIT License
184 stars · 61 forks

list index out of range: `rev_dec_vocab[output] for output in outputs` #1

Closed · gaurav22verma closed this issue 6 years ago

gaurav22verma commented 6 years ago

While testing, I am encountering the following error:

Traceback (most recent call last):
  File "execute.py", line 294, in <module>
    decode()
  File "execute.py", line 227, in decode
    predicted_headline.write(" ".join([tf.compat.as_str(rev_dec_vocab[output]) for output in outputs])+'\n')
IndexError: list index out of range

My outputs list looks like this:

[38156, 38156, 38156, 38156, 14453, 14453, 8254, 25504, 25504, 27218, 25504, 8254, 8254, 8254, 8254, 8254, 8254, 27218, 10802, 27218]

And the length of rev_dec_vocab is 12695.

This explains the error, but can you explain why I am facing it? What do these variables signify? Also, the outputs list contains repeated elements. Is that okay, or is something wrong? Thanks!
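As a defensive workaround, a sketch (the `ids_to_words` helper and the `_UNK` marker are assumptions for illustration, not part of the repo's `decode()`): substitute an unknown-word token for any id that falls outside the loaded vocabulary, instead of letting the list lookup raise an IndexError.

```python
UNK_TOKEN = "_UNK"  # assumed unknown-word marker

def ids_to_words(outputs, rev_dec_vocab, unk=UNK_TOKEN):
    """Map model output ids to words, substituting `unk` for out-of-range ids
    instead of raising IndexError like a direct rev_dec_vocab[output] lookup."""
    return [rev_dec_vocab[i] if 0 <= i < len(rev_dec_vocab) else unk
            for i in outputs]

# Example with a toy vocabulary of length 3:
print(ids_to_words([0, 2, 5], ["the", "a", "news"]))  # → ['the', 'news', '_UNK']
```

This only masks the symptom; if most ids map to `_UNK`, the vocabulary being loaded is still the wrong one.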

PR-Iyyer commented 6 years ago

Same error for me too. :( @TheGalileo, is the same dataset provided here giving you the length 12695?

gaurav22verma commented 6 years ago

@PR-Iyyer No, I am using something else. I believe this error is due to a vocabulary mismatch between the test and train data. I haven't looked into it much yet, but will resolve it soon.
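A quick way to confirm a mismatch like this, as a sketch (the `check_coverage` helper is hypothetical, not from the repo): compare the largest output id against the size of the vocabulary loaded at decode time. Using the numbers reported in this issue:

```python
def check_coverage(output_ids, vocab_size):
    """Report whether every model output id can be looked up in a
    reverse vocabulary list of the given size."""
    max_id = max(output_ids)
    if max_id >= vocab_size:
        return "mismatch: max output id %d, vocab size %d" % (max_id, vocab_size)
    return "ok"

# The outputs above contain id 38156, but rev_dec_vocab has only 12695 entries:
print(check_coverage([38156, 14453, 8254], 12695))
# → mismatch: max output id 38156, vocab size 12695
```

A mismatch here means the checkpoint was trained against different vocabulary files than the ones being loaded for decoding.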

PR-Iyyer commented 6 years ago

ok thanks.

PR-Iyyer commented 6 years ago

Can you share more about your training data? Especially the size, format, etc.?


PR-Iyyer commented 6 years ago

Hi, I got it fixed. For testing, I used the pretrained model from the checkpoint in working_directory.


gaurav22verma commented 6 years ago

@PR-Iyyer I am sorry, I don't understand. Can you please elaborate? What changes did you make in decode() to use the pretrained model?

PR-Iyyer commented 6 years ago

@TheGalileo: During testing, it was not using the pretrained model initially. So I explicitly edited seq2seq.ini, setting the path to the model using the corresponding name from the checkpoint file in the working_dir directory.

gaurav22verma commented 6 years ago

@PR-Iyyer: That's what I have been trying to do too. Can you share your seq2seq.ini with me here?

PR-Iyyer commented 6 years ago

Did you try giving the full path?

gaurav22verma commented 6 years ago

@PR-Iyyer Yes, I did. Do you have your code on GitHub? Somewhere where I can have a quick look at your seq2seq.ini?

PR-Iyyer commented 6 years ago

```ini
[strings]
# Mode : train, test, interactive
mode = interactive
pretrained_model = /data/praveena/Newfinal/deep-news-summarization/working_dir/seq2seq.ckpt-357000

# Specify the training, evaluation and testing encode and decode dataset path
train_enc = dataset/train_enc.txt
train_dec = dataset/train_dec.txt
eval_enc = dataset/eval_enc.txt
eval_dec = dataset/eval_dec.txt
test_enc = dataset/test_enc.txt
test_dec = dataset/test_dec.txt

# folder where checkpoints and vocabulary will be stored
working_directory = working_dir/

# path to store predicted output
output = output/predicted_test_headline.txt

[ints]
# vocabulary size
# typical options: 40000, 60000, 80000. The results shown in the repo use
# a vocab size of 80000.
enc_vocab_size = 40000
dec_vocab_size = 40000

# number of LSTM layers : 1/2/3. The results shown in the repo use 3 layers.
num_layers = 1

# typical options : 128, 256, 512, 1024. The results shown in the repo use
# 512 hidden units.
hidden_units = 128

# dataset size limit; typically none : no limit
max_train_data_size = 0

# Control batch size to decide when to update weights
batch_size = 128

# steps per checkpoint
# Note : At a checkpoint, model parameters are saved, the model is evaluated
# and results are printed
steps_per_checkpoint = 100

[floats]
learning_rate = 0.5
learning_rate_decay_factor = 0.99
max_gradient_norm = 5.0

##############################################################################
# Note : Edit the bucket sizes at line 47 of execute.py (_buckets)
#
# Learn more about the configurations from this link:
# https://www.tensorflow.org/versions/r0.9/tutorials/seq2seq/index.html
##############################################################################
```


gaurav22verma commented 6 years ago

This is what I am using in my seq2seq.ini: pretrained_model = working_directory/seq2seq.ckpt-23500

And I am ending up with the following error:

Unsuccessful TensorSliceReader constructor: Failed to get matching files on working_directory/seq2seq.ckpt-23500: Not found: working_directory
     [[Node: save/RestoreV2_25 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save/Const_0, save/RestoreV2_25/tensor_names, save/RestoreV2_25/shape_and_slices)]]

Shouldn't this work, since the rest of the paths are relative too?

PR-Iyyer commented 6 years ago

At times, it won't work when the complete path is not given. The error says it is still unable to find the checkpoint directory. I strongly recommend you provide the full path to seq2seq.ckpt-XXXX, like I have given in my config. Just try once and see. I feel it should work.
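If relative paths keep failing, one option as a sketch (the `resolve_checkpoint` helper and resolving against an explicit base directory are assumptions for illustration, not what the repo does): expand the configured path to an absolute one before handing it to the TensorFlow saver, so restoring no longer depends on the current working directory.

```python
import os

def resolve_checkpoint(path, base_dir="."):
    """Expand a possibly-relative checkpoint path from seq2seq.ini to an
    absolute path, so tf.train.Saver.restore() does not depend on where
    execute.py was launched from."""
    path = os.path.expanduser(path)
    if not os.path.isabs(path):
        path = os.path.join(os.path.abspath(base_dir), path)
    return path
```

For example, `resolve_checkpoint("working_dir/seq2seq.ckpt-357000")` returns an absolute path anchored at the current directory, while an already-absolute path passes through unchanged.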

gaurav22verma commented 6 years ago

Yeah, it works. I had written pretrained_model = working_directory/seq2seq.ckpt-xxxxx instead of pretrained_model = working_dir/seq2seq.ckpt-xxxxx in my seq2seq.ini. Thanks! :)