monikkinom / ner-lstm

Named Entity Recognition using multilayered bidirectional LSTM

'bidirectional_rnn' question #18

Closed gahu1125 closed 5 years ago

gahu1125 commented 7 years ago

I'm using TensorFlow 1.0, and I got the error message below while running model.py.

AttributeError: module 'tensorflow.python.ops.nn' has no attribute 'bidirectional_rnn'

I looked up this problem on Stack Overflow, and users told me:

In TensorFlow 1.0, you have the choice of two bidirectional RNN functions: tf.nn.bidirectional_dynamic_rnn() and tf.contrib.rnn.static_bidirectional_rnn().

Any idea which bidirectional RNN I should change to?

utkrist commented 7 years ago

@gahu1125 It really depends on your use case. If you don't know your nsteps parameter in advance, you can set that dimension to None when defining the input/output placeholders and use the dynamic one; but if you know your max_nsteps in advance and have sufficient GPU memory, you can use the static version.
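
If you just want the minimal change: as far as I know, tf.contrib.rnn.static_bidirectional_rnn is the old tf.nn.bidirectional_rnn renamed, with the same signature, so a one-line rename should be enough (a sketch with made-up sizes, not the actual model.py code):

import tensorflow as tf

nsteps, in_dim, nhidden = 10, 50, 128  # made-up sizes
x = tf.placeholder(tf.float32, [None, nsteps, in_dim])
inputs = tf.unstack(x, nsteps, 1)      # list of nsteps tensors of shape (batch_size, in_dim)
fw = tf.contrib.rnn.BasicLSTMCell(nhidden)
bw = tf.contrib.rnn.BasicLSTMCell(nhidden)

# TF < 1.0: outputs, _, _ = tf.nn.bidirectional_rnn(fw, bw, inputs, dtype=tf.float32)
outputs, _, _ = tf.contrib.rnn.static_bidirectional_rnn(fw, bw, inputs, dtype=tf.float32)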

I'm also posting a snippet from my code for your reference:

def multi_layer_birnn_static(config, input, seq_len, dropout):
    nhidden = config.nb_hidden
    ntags   = config.out_dim
    nsteps  = config.nb_steps
    nlayers = config.nb_layers
    cell    = rnn_cell(config.cell_type)

    # input shape: (batch_size, nsteps, in_dim)
    # Unstack to get a list of 'n_steps' tensors of shape (batch_size, n_input)
    input = tf.unstack(input, nsteps, 1)

    def _single_cell():
        _cell = cell(num_units=nhidden, state_is_tuple=True)
        _cell = tf.contrib.rnn.DropoutWrapper(_cell, output_keep_prob=dropout)
        return _cell

    fw_cell = tf.contrib.rnn.MultiRNNCell(cells=[_single_cell() for _ in range(nlayers)], state_is_tuple=True)
    bw_cell = tf.contrib.rnn.MultiRNNCell(cells=[_single_cell() for _ in range(nlayers)], state_is_tuple=True)

    output, _, _ = tf.contrib.rnn.static_bidirectional_rnn(fw_cell, bw_cell, input, dtype=tf.float32)
    output = tf.stack(output, 1)
    return output
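
A rough usage sketch for the static version (hypothetical shapes; the config object and the rnn_cell helper come from the rest of my code). Note that the static version unrolls the graph, so nsteps must be fixed at graph-construction time:

in_dim = 100  # hypothetical input feature size
x = tf.placeholder(tf.float32, [None, config.nb_steps, in_dim])
seq_len = tf.placeholder(tf.int32, [None])   # unused by the static version
keep_prob = tf.placeholder(tf.float32)       # feed 1.0 at test time
output = multi_layer_birnn_static(config, x, seq_len, keep_prob)
# output shape: (batch_size, nsteps, 2 * nhidden)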

def multi_layer_birnn_dynamic(config, input, seq_len, dropout):
    nhidden = config.nb_hidden
    ntags   = config.out_dim
    nsteps  = config.nb_steps
    nlayers = config.nb_layers
    cell    = rnn_cell(config.cell_type)

    # permute n_steps and batch_size: (batch, time, features) -> (time, batch, features)
    input = tf.transpose(input, [1, 0, 2])

    def _single_cell():
        _cell = cell(num_units=nhidden, state_is_tuple=True)
        _cell = tf.contrib.rnn.DropoutWrapper(_cell, output_keep_prob=dropout)
        return _cell

    fw_cell = tf.contrib.rnn.MultiRNNCell(cells=[_single_cell() for _ in range(nlayers)], state_is_tuple=True)
    bw_cell = tf.contrib.rnn.MultiRNNCell(cells=[_single_cell() for _ in range(nlayers)], state_is_tuple=True)

    outputs, states = tf.nn.bidirectional_dynamic_rnn(
        cell_fw=fw_cell, 
        cell_bw=bw_cell,
        dtype=tf.float32,
        inputs=input,
        time_major=True, 
        sequence_length=seq_len)
    out_fw, out_bw = outputs
    output = tf.concat([out_fw, out_bw], axis=-1)
    output = tf.transpose(output, [1, 0, 2])  # back to batch-major: (batch, time, 2*nhidden)
    return output
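
And for the dynamic version the time dimension can stay None, so batches of varying length work without a fixed nsteps (again hypothetical shapes, in_dim as above):

x = tf.placeholder(tf.float32, [None, None, in_dim])  # (batch, time, features)
seq_len = tf.placeholder(tf.int32, [None])            # true length of each sequence
keep_prob = tf.placeholder(tf.float32)
output = multi_layer_birnn_dynamic(config, x, seq_len, keep_prob)
# output shape: (batch_size, max_time, 2 * nhidden)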

lin520chong commented 6 years ago

Have you resolved this error? Can you share the method with us? I replaced tf.nn.bidirectional_rnn with tf.nn.bidirectional_dynamic_rnn, but it still doesn't work.

swethmandava commented 6 years ago

Maybe this can help: https://github.com/swethmandava/text_normalization/blob/master/blstm_new.py

nanobyte-dg commented 6 years ago

@gahu1125 Were you able to resolve the issue? I am also facing the same issue.

aksstar commented 6 years ago

@lin520chong Did you solve this issue ?

Chiang97912 commented 6 years ago

Maybe you can get a solution from this: https://github.com/KeithYin/mycodes/blob/master/tensorflow-piece/diy-multi-layer-bi-rnn.py

sarvy26 commented 5 years ago

For TensorFlow 1.10.1, this issue still exists.

gopi1410 commented 5 years ago

Updated code here, with bidirectional_dynamic_rnn and using TF 1.4:
https://github.com/gopi1410/ner-lstm