@gahu1125 It really depends on your use case. If you don't know your nsteps parameter in advance, you can set it to None when defining the input or output placeholder and use the dynamic version; but if you know your max_nsteps in advance and have sufficient GPU memory, you can even use the static version.
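For example, a minimal sketch of the two placeholder styles (the sizes in_dim and max_nsteps below are made up for illustration, not from the original code):

    import tensorflow as tf

    # Hypothetical sizes, only for illustration.
    in_dim = 300
    max_nsteps = 50

    # Variable-length case: leave the time dimension as None, feed the true
    # lengths separately, and use the dynamic RNN.
    x_dynamic = tf.placeholder(tf.float32, shape=[None, None, in_dim])
    seq_len = tf.placeholder(tf.int32, shape=[None])

    # Fixed-length case: declare max_nsteps up front and use the static RNN.
    x_static = tf.placeholder(tf.float32, shape=[None, max_nsteps, in_dim])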
I'm posting a snippet from my code for your reference:
def multi_layer_birnn_static(config, input, seq_len, dropout):
    nhidden = config.nb_hidden
    ntags = config.out_dim
    nsteps = config.nb_steps
    nlayers = config.nb_layers
    cell = rnn_cell(config.cell_type)

    # input shape: (batch_size, nsteps, in_dim)
    # Unstack to get a list of nsteps tensors of shape (batch_size, in_dim)
    input = tf.unstack(input, nsteps, 1)

    def _single_cell():
        _cell = cell(num_units=nhidden, state_is_tuple=True)
        _cell = tf.contrib.rnn.DropoutWrapper(_cell, output_keep_prob=dropout)
        return _cell

    fw_cell = tf.contrib.rnn.MultiRNNCell(cells=[_single_cell() for _ in range(nlayers)], state_is_tuple=True)
    bw_cell = tf.contrib.rnn.MultiRNNCell(cells=[_single_cell() for _ in range(nlayers)], state_is_tuple=True)
    output, _, _ = tf.contrib.rnn.static_bidirectional_rnn(fw_cell, bw_cell, input, dtype=tf.float32)
    output = tf.stack(output, 1)  # back to shape (batch_size, nsteps, 2 * nhidden)
    return output
def multi_layer_birnn_dynamic(config, input, seq_len, dropout):
    nhidden = config.nb_hidden
    ntags = config.out_dim
    nsteps = config.nb_steps
    nlayers = config.nb_layers
    cell = rnn_cell(config.cell_type)

    # permute nsteps and batch_size: (batch_size, nsteps, in_dim) -> (nsteps, batch_size, in_dim)
    input = tf.transpose(input, [1, 0, 2])

    def _single_cell():
        _cell = cell(num_units=nhidden, state_is_tuple=True)
        _cell = tf.contrib.rnn.DropoutWrapper(_cell, output_keep_prob=dropout)
        return _cell

    fw_cell = tf.contrib.rnn.MultiRNNCell(cells=[_single_cell() for _ in range(nlayers)], state_is_tuple=True)
    bw_cell = tf.contrib.rnn.MultiRNNCell(cells=[_single_cell() for _ in range(nlayers)], state_is_tuple=True)
    outputs, states = tf.nn.bidirectional_dynamic_rnn(
        cell_fw=fw_cell,
        cell_bw=bw_cell,
        dtype=tf.float32,
        inputs=input,
        time_major=True,
        sequence_length=seq_len)
    out_fw, out_bw = outputs
    output = tf.concat([out_fw, out_bw], axis=-1)
    output = tf.transpose(output, [1, 0, 2])  # back to shape (batch_size, nsteps, 2 * nhidden)
    return output
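For reference, a minimal usage sketch of the dynamic variant above (not from the original repo: the config fields, the rnn_cell helper, and the placeholder sizes here are assumptions made only for illustration):

    import tensorflow as tf
    from types import SimpleNamespace

    def rnn_cell(cell_type):
        # Assumed helper (not shown in the thread): map the config string
        # to a TF 1.x cell class.
        return tf.contrib.rnn.LSTMCell

    config = SimpleNamespace(nb_hidden=128, out_dim=10, nb_steps=50,
                             nb_layers=2, cell_type='lstm')

    x = tf.placeholder(tf.float32, [None, None, 300])  # (batch, time, in_dim)
    seq_len = tf.placeholder(tf.int32, [None])         # true length of each example
    keep_prob = tf.placeholder(tf.float32)             # dropout keep probability

    # Result shape: (batch_size, nsteps, 2 * nb_hidden)
    output = multi_layer_birnn_dynamic(config, x, seq_len, keep_prob)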
Have you resolved this error? Can you share the fix with us? I replaced tf.nn.bidirectional_rnn with tf.nn.bidirectional_dynamic_rnn, but it still does not work.
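One common pitfall when making that switch (a minimal sketch, not a claim about what is failing in your code; the sizes and placeholders are made up): the static API consumes a Python list of per-step 2-D tensors, while tf.nn.bidirectional_dynamic_rnn consumes the 3-D tensor directly and returns a (forward, backward) tuple of outputs that still has to be concatenated.

    import tensorflow as tf

    in_dim, nhidden, nsteps = 300, 128, 50
    x = tf.placeholder(tf.float32, [None, nsteps, in_dim])
    seq_len = tf.placeholder(tf.int32, [None])

    # The static API (tf.contrib.rnn.static_bidirectional_rnn, the TF 1.x name
    # for the old tf.nn.bidirectional_rnn) expects a list of nsteps tensors of
    # shape (batch_size, in_dim), hence the tf.unstack:
    with tf.variable_scope('static_demo'):
        fw = tf.contrib.rnn.LSTMCell(nhidden)
        bw = tf.contrib.rnn.LSTMCell(nhidden)
        static_out, _, _ = tf.contrib.rnn.static_bidirectional_rnn(
            fw, bw, tf.unstack(x, nsteps, 1), dtype=tf.float32)

    # The dynamic API takes the 3-D tensor directly (no tf.unstack) and
    # returns a (forward, backward) tuple of outputs to concatenate:
    with tf.variable_scope('dynamic_demo'):
        fw = tf.contrib.rnn.LSTMCell(nhidden)
        bw = tf.contrib.rnn.LSTMCell(nhidden)
        (out_fw, out_bw), _ = tf.nn.bidirectional_dynamic_rnn(
            fw, bw, x, sequence_length=seq_len, dtype=tf.float32)
        dynamic_out = tf.concat([out_fw, out_bw], axis=-1)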
Maybe this can help: https://github.com/swethmandava/text_normalization/blob/master/blstm_new.py
@gahu1125 Were you able to resolve the issue? I am also facing the same issue.
@lin520chong Did you solve this issue?
Maybe you can get a solution from this: https://github.com/KeithYin/mycodes/blob/master/tensorflow-piece/diy-multi-layer-bi-rnn.py
For TensorFlow 1.10.1 this issue still exists.
Updated code with bidirectional_dynamic_rnn, using TF 1.4, here:
https://github.com/gopi1410/ner-lstm
I'm using TensorFlow 1.0 and got the error message below while running model.py.
I looked this problem up on Stack Overflow, and users told me that:
Any idea which bidirectional RNN I should change to?