QiXuanWang opened this issue 7 years ago
Instead of `tf.unpack`, use `tf.unstack`.
Getting this error:

```
ValueError: Attempt to reuse RNNCell <tensorflow.contrib.rnn.python.ops.core_rnn_cell_impl.GRUCell object at 0x7efcdd676978> with a different variable scope than its first use. First use of cell was with scope 'rnn/attention_cell_wrapper/multi_rnn_cell/cell_0/gru_cell', this attempt is with scope 'rnn/attention_cell_wrapper/multi_rnn_cell/cell_1/gru_cell'. Please create a new instance of the cell if you would like it to use a different set of weights. If before you were using: MultiRNNCell([GRUCell(...)] * num_layers), change to: MultiRNNCell([GRUCell(...) for _ in range(num_layers)]). If before you were using the same cell instance as both the forward and reverse cell of a bidirectional RNN, simply create two instances (one for forward, one for reverse). In May 2017, we will start transitioning this cell's behavior to use existing stored weights, if any, when it is called with scope=None (which can lead to silent model degradation, so this error will remain until then).
```
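The fix the error message suggests comes down to object identity: `[cell] * num_layers` puts the *same* cell instance at every layer (so TF tries to reuse its variables across scopes), while a list comprehension builds a fresh instance per layer. A minimal sketch of the difference, using a stand-in class rather than the real `tf.contrib.rnn.GRUCell`:

```python
class GRUCell:
    """Stand-in for tf.contrib.rnn.GRUCell, just to show instance sharing."""
    def __init__(self, num_units):
        self.num_units = num_units

num_layers = 2

# Broken pattern: one instance repeated -> each layer shares the same object,
# which triggers the "Attempt to reuse RNNCell" ValueError in TF.
shared = [GRUCell(128)] * num_layers

# Fixed pattern: a fresh instance per layer, each with its own weights.
fresh = [GRUCell(128) for _ in range(num_layers)]

print(shared[0] is shared[1])  # True: same object at both layers
print(fresh[0] is fresh[1])    # False: distinct objects
```

With the real API, the same pattern applies: pass `MultiRNNCell([tf.contrib.rnn.GRUCell(...) for _ in range(num_layers)])` instead of multiplying a single-element list.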
To work with the latest TF package, you'll need to make some modifications to the last part, namely the RNN code. `tf.nn.rnn_cell` is no longer there; you'll need to use `tf.contrib.rnn.*` instead (the function names are the same). Also, I'm trying to use `state_is_tuple=True` since it's recommended now, but it failed...
BTW, why is the RNN training loss still 1000+ at the end? Does that mean anything?