Closed: anxingle closed this issue 7 years ago
Sorry for the missing example code. I have now added the example code I used before to README.md.
I'll leave this issue open; if you have any more questions, comment here. If not, close this issue. Thank you!
@iwyoo Thanks a million!
```python
_X = tf.placeholder(tf.float32, [batch_size, height, width, nsteps, channel])
_Hsplit = tf.split(3, nsteps, _X)
# shape: nsteps * [batch_size, height, width, channel]
_Hsplit = [tf.squeeze(p_input, [3]) for p_input in _Hsplit]
with tf.variable_scope(_name):
    lstm_cell = convLSTM(dimhidden, forget_bias=1.0,
                         state_is_tuple=True)
    # lstm_cell = tf.nn.rnn_cell.DropoutWrapper(lstm_cell, input_keep_prob=0.5,
    #                                           output_keep_prob=0.5)
    state = lstm_cell.zero_state(batch_size, height, width)
    # lstm_cell = tf.nn.rnn_cell.MultiRNNCell([lstm_cell] * num_layers,
    #                                         state_is_tuple=True)
    outputs = []
    for input_ in _Hsplit:
        output, state = lstm_cell(input_, state)
        outputs.append(output)
    # Second pass over the first layer's outputs (a stacked recurrence)
    result_out = []
    state_h = lstm_cell.zero_state(batch_size, height, width)
    for output_ in outputs:
        out, state_h = lstm_cell(output_, state_h)
        result_out.append(out)
    # _LSTM_O, _LSTM_S = tf.nn.rnn(lstm_cell, _Hsplit,
    #                              initial_state=state)
    return result_out
```
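(Side note on shapes: the split/squeeze at the top of this snippet turns the `[batch_size, height, width, nsteps, channel]` placeholder into `nsteps` tensors of shape `[batch_size, height, width, channel]`. A NumPy-only sketch of the same reshaping, purely to illustrate the shapes:)

```python
import numpy as np

batch_size, height, width, nsteps, channel = 2, 8, 8, 5, 3
x = np.zeros((batch_size, height, width, nsteps, channel))

# Split along axis 3 (the time axis) and drop the singleton dimension,
# mirroring tf.split(3, nsteps, _X) followed by tf.squeeze(..., [3]).
steps = [np.squeeze(s, axis=3) for s in np.split(x, nsteps, axis=3)]

print(len(steps))      # one slice per time step
print(steps[0].shape)  # (batch_size, height, width, channel)
```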
I can't use "DropoutWrapper", "MultiRNNCell", or "tf.nn.rnn", so I rewrote equivalent functions to replace "tf.nn.rnn". But I got this:
```
File "lstm_ctc_ori.py", line 84, in _RNN
  output, state = lstm_cell(input_, state)
File "/mnt/d/workspace/ubuntu/experiment_recog/convLstm/ConvLSTMCell.py", line 70, in __call__
  c, h = state
File "/mnt/d/workspace/ubuntu/tf/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 495, in __iter__
  raise TypeError("'Tensor' object is not iterable.")
TypeError: 'Tensor' object is not iterable.
```

I want to feed pictures like this to the network:
I have not checked whether this implementation is compatible with the layers provided by TensorFlow, because they use different operators. I will try to make it compatible as soon as I can. The layers mentioned above (DropoutWrapper, MultiRNNCell, tf.nn.rnn) can be stacked directly as a block using ConvLSTMCell; if you are in a hurry, I suggest implementing it that way (I have done similar experiments in the past). Reading the error, it seems to be caused by a difference between the old and the current TensorFlow version (related to the `state_is_tuple` variable). I am going to implement it by referring to the current TensorFlow implementation, but I cannot say how long it will take.
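To make the version difference concrete: with `state_is_tuple=True`, the cell state is a `(c, h)` pair, so `c, h = state` works; with the older convention, the state is a single tensor with `c` and `h` concatenated along the channel axis, and the same unpacking fails (in TensorFlow it raises exactly the `'Tensor' object is not iterable` error above). A NumPy sketch of the two conventions (names here are illustrative, not the library's API):

```python
import numpy as np

batch, height, width, hidden = 2, 4, 4, 6

# Tuple-style state: unpacking into (c, h) works directly.
c = np.zeros((batch, height, width, hidden))
h = np.zeros((batch, height, width, hidden))
state_tuple = (c, h)
c_, h_ = state_tuple  # fine

# Concatenated-style state: c and h packed along the channel axis.
state_concat = np.concatenate([c, h], axis=3)  # (batch, H, W, 2*hidden)
# "c, h = state_concat" would iterate over the first axis instead of
# splitting channels (and a TF Tensor is not iterable at all); the
# concatenated form must be split explicitly:
c2, h2 = np.split(state_concat, 2, axis=3)
```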
OK, I will try to implement it that way. Your ConvLSTM implementation has been a big help. Thank you.
Hi @iwyoo, thanks for sharing your implementation! In the example provided in the README, the following appears:

```python
t_output, state = cell(p_input_, state, k_size)
```

I think the `k_size` parameter should actually be the scope name, shouldn't it?
Oh, thanks for pointing that out. The README file was an old version; it is fixed now.
I have some points of confusion about the code:
`_conv(args, output_size, ....)` says the shape of the inputs is (batch_size x height x width x arg_size); what is the meaning of "arg_size"?
How do I define the input of ConvLSTMCell? I know how to define an LSTM network:
In the above code, how do I change the `X` and `input_split` to feed them to ConvLSTMCell? (A simple demo showing how to use it would be greatly appreciated!)
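(For readers looking for a self-contained picture of the data flow: below is a NumPy-only sketch of a single ConvLSTM step over a per-step input of shape `[height, width, channel]`, with the state kept as a `(c, h)` pair. The helper names and gate layout are illustrative assumptions, not this repository's API.)

```python
import numpy as np

def conv2d_same(x, w):
    """Naive 'SAME' 2-D convolution: x is (H, W, Cin), w is (k, k, Cin, Cout)."""
    k = w.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))
    H, W, _ = x.shape
    out = np.zeros((H, W, w.shape[3]))
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + k, j:j + k, :]
            out[i, j, :] = np.tensordot(patch, w, axes=([0, 1, 2], [0, 1, 2]))
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def convlstm_step(x, c, h, w):
    """One ConvLSTM step: x, c, h are (H, W, C); w maps [x; h] to 4*C gate maps."""
    # Input and hidden state concatenated along the channel axis, then convolved.
    gates = conv2d_same(np.concatenate([x, h], axis=2), w)
    i, f, o, g = np.split(gates, 4, axis=2)
    c_new = sigmoid(f + 1.0) * c + sigmoid(i) * np.tanh(g)  # forget_bias = 1.0
    h_new = sigmoid(o) * np.tanh(c_new)
    return c_new, h_new

H, W, C, k = 4, 4, 3, 3
rng = np.random.default_rng(0)
w = rng.standard_normal((k, k, 2 * C, 4 * C)) * 0.1
c = np.zeros((H, W, C))
h = np.zeros((H, W, C))
for x in [rng.standard_normal((H, W, C)) for _ in range(5)]:  # 5 time steps
    c, h = convlstm_step(x, c, h, w)
print(h.shape)  # per-step output keeps the spatial layout: (4, 4, 3)
```

The point of the sketch is that each time step consumes one `[height, width, channel]` slice (e.g. one element of the split list above), so a 5-D `[batch, height, width, nsteps, channel]` input must be split along the time axis before being fed step by step.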