sherjilozair / char-rnn-tensorflow

Multi-layer Recurrent Neural Networks (LSTM, RNN) for character-level language models in Python using Tensorflow
MIT License
2.64k stars, 960 forks

NotImplementedError: Negative indices are currently unsupported #10

Closed davidbernat closed 8 years ago

davidbernat commented 8 years ago

Line 53 of model.py contains the code: self.final_state = states[-1]

This throws the exception below. TensorFlow does not support negative indices on Tensors (at least in the publicly available version). What is the workaround? So many thanks.

File "/Library/Python/2.7/site-packages/tensorflow/python/ops/array_ops.py", line 124, in _SliceHelper raise NotImplementedError("Negative indices are currently unsupported") NotImplementedError: Negative indices are currently unsupported Exception TypeError: TypeError("'NoneType' object is not callable",) in <function _remove at 0x101c9b488> ignored

davidbernat commented 8 years ago

The issue appears to come from the call to seq2seq.rnn_decoder. That method returns a single final-state Tensor, not a list of per-step states as the original code assumes. The changes below compile and run, but do not perform well.

        #outputs, states = seq2seq.rnn_decoder(inputs, self.initial_state, cell, loop_function=loop if infer else None, scope='rnnlm')
        outputs, state = seq2seq.rnn_decoder(inputs, self.initial_state, cell, loop_function=loop if infer else None, scope='rnnlm')
        output = tf.reshape(tf.concat(1, outputs), [-1, args.rnn_size])
        self.logits = tf.nn.xw_plus_b(output, softmax_w, softmax_b)
        self.probs = tf.nn.softmax(self.logits)
        loss = seq2seq.sequence_loss_by_example([self.logits],
                [tf.reshape(self.targets, [-1])],
                [tf.ones([args.batch_size * args.seq_length])],
                args.vocab_size)
        self.cost = tf.reduce_sum(loss) / args.batch_size / args.seq_length
        # self.final_state = states[-1]
        self.final_state = state
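For what it's worth, here is a rough sketch (not code from this repo; names like model, vocab, and prime are placeholders) of how self.final_state can then be threaded back through feed_dict when sampling one character at a time:

    import numpy as np

    state = sess.run(model.initial_state)          # zero state from cell.zero_state(...)
    for ch in prime:                               # prime: seed string, vocab: char -> id map
        x = np.zeros((1, 1))
        x[0, 0] = vocab[ch]
        feed = {model.input_data: x, model.initial_state: state}
        state = sess.run(model.final_state, feed)  # carried over as the next step's initial state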
ghost commented 8 years ago

Thank you very much for that fix! :)

ghost commented 8 years ago

How to fix "NotImplementedError: Negative indices are currently unsupported"

Change

    outputs, states = seq2seq.rnn_decoder(inputs, self.initial_state, cell, loop_function=loop if infer else None, scope='rnnlm')

to

    outputs, state = seq2seq.rnn_decoder(inputs, self.initial_state, cell, loop_function=loop if infer else None, scope='rnnlm')
    output = tf.reshape(tf.concat(1, outputs), [-1, args.rnn_size])
    self.logits = tf.nn.xw_plus_b(output, softmax_w, softmax_b)
    self.probs = tf.nn.softmax(self.logits)
    loss = seq2seq.sequence_loss_by_example([self.logits],
            [tf.reshape(self.targets, [-1])],
            [tf.ones([args.batch_size * args.seq_length])],
            args.vocab_size)
    self.cost = tf.reduce_sum(loss) / args.batch_size / args.seq_length
    # self.final_state = states[-1]
    self.final_state = state
sherjilozair commented 8 years ago

This is fixed now. Please re-open if not.