Closed: davidbernat closed this issue 8 years ago
The issue appears to be related to the call to seq2seq.rnn_decoder. This method returns a single state Tensor, not a list of Tensors as the original code suggests. The changes below compile and run, but the model does not perform well.
#outputs, states = seq2seq.rnn_decoder(inputs, self.initial_state, cell, loop_function=loop if infer else None, scope='rnnlm')
outputs, state = seq2seq.rnn_decoder(inputs, self.initial_state, cell, loop_function=loop if infer else None, scope='rnnlm')
output = tf.reshape(tf.concat(1, outputs), [-1, args.rnn_size])
self.logits = tf.nn.xw_plus_b(output, softmax_w, softmax_b)
self.probs = tf.nn.softmax(self.logits)
loss = seq2seq.sequence_loss_by_example([self.logits],
                                        [tf.reshape(self.targets, [-1])],
                                        [tf.ones([args.batch_size * args.seq_length])],
                                        args.vocab_size)
self.cost = tf.reduce_sum(loss) / args.batch_size / args.seq_length
# self.final_state = states[-1]
self.final_state = state
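The root cause is rnn_decoder's return contract: it yields a list of per-timestep outputs but only a single final state. A minimal pure-Python stand-in (a hypothetical `rnn_decoder_stub`, no TensorFlow required) illustrates why indexing the returned state fails while using it directly works:

```python
def rnn_decoder_stub(inputs, initial_state):
    """Mimics seq2seq.rnn_decoder's return contract: a list of
    per-timestep outputs and ONE final state (plain floats here,
    standing in for Tensors)."""
    state = initial_state
    outputs = []
    for x in inputs:
        state = state + x      # stand-in for one RNN cell step
        outputs.append(state)
    return outputs, state      # a single state, not a list of states

outputs, state = rnn_decoder_stub([1.0, 2.0, 3.0], 0.0)
# outputs is a list, so outputs[-1] is fine; state is a single value,
# so state[-1] raises TypeError, analogous to the failure in model.py.
final_state = state
```

Because the second return value is already the final state, the fix is simply `self.final_state = state` rather than `states[-1]`.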
Thank you very much for that fix! :)
This is fixed now. Please re-open if not.
Line 53 of model.py contains the code:
self.final_state = states[-1]
This throws an exception: TensorFlow does not support indexing a Tensor with a negative index (at least in the publicly available version). What is the workaround? Many thanks.
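For reference, the sequence_loss_by_example call in the fix above computes a weighted per-timestep cross-entropy, which self.cost then averages over batch size and sequence length. A rough NumPy sketch of that computation (an illustration of the math, not the TensorFlow implementation):

```python
import numpy as np

def sequence_loss_by_example(logits, targets, weights):
    """Per-example weighted cross-entropy.

    logits:  (batch * seq_length, vocab_size) unnormalized scores
    targets: (batch * seq_length,) integer class ids
    weights: (batch * seq_length,) per-timestep weights (all ones in model.py)
    """
    # log-softmax with the usual max-subtraction for numerical stability
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # negative log-likelihood of each target class
    nll = -log_probs[np.arange(len(targets)), targets]
    return nll * weights
```

The cost line then corresponds to `cost = loss.sum() / batch_size / seq_length`, i.e. the mean cross-entropy per character.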