Higgcz closed this issue 8 years ago.
Keras's implementation of dynamic RNN in TensorFlow does not currently support sequences of rank > 3.
That's the thing, I don't need sequences with rank > 3 for the LSTM, but there is no other way to connect a CNN to an LSTM. The input to the LSTM is just a sequence of length one containing a 1D vector.
Resolved with #3835. Thanks!
After updating to the latest version of Keras, I've noticed a weird thing.
I'm trying to use a CNN with a stateful LSTM, but I have a problem creating the model under the TensorFlow backend. Under Theano everything works just fine, but TensorFlow is significantly faster, so I would prefer the TensorFlow backend.
At first there was a problem in the TimeDistributed wrapper which caused a "No initial states" exception; that was fixed with commit 4fb3f1b3f384c3a05306b37ea9a736144ed6394a, but it created another problem.
The input image has dimensions 16 x 64 x 3, the batch size is 1, and the time dimension is also 1. Now, if I create the TimeDistributed convolutional layer with just input_shape:
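A minimal sketch of such a layer (written against the current Keras functional API; the 2x2 kernel and 32 filters are assumptions inferred from the shapes in the error reported below):

```python
from keras.layers import Input, Conv2D, TimeDistributed
from keras.models import Model

# input_shape only: (time, rows, cols, channels) -- batch size left unspecified
inp = Input(shape=(1, 16, 64, 3))
x = TimeDistributed(Conv2D(32, (2, 2)))(inp)  # hypothetical 2x2 kernel, 32 filters
model = Model(inp, x)
print(model.output_shape)  # (None, 1, 15, 63, 32)
```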
It works fine, but obviously I cannot add the LSTM layer, because it needs to know the batch size as well. But when I create the TimeDistributed layer with batch_input_shape:
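The failing variant, sketched with the batch size pinned (current Keras spells this batch_shape on Input; Keras 1.x used batch_input_shape on the layer). On current Keras this builds cleanly, but on Keras 1.1.0 / TF 0.10 it triggered the shape error reported below:

```python
from keras.layers import Input, Conv2D, TimeDistributed
from keras.models import Model

# batch size pinned to 1, as a downstream stateful LSTM requires
inp = Input(batch_shape=(1, 1, 16, 64, 3))  # (batch, time, rows, cols, channels)
x = TimeDistributed(Conv2D(32, (2, 2)))(inp)  # hypothetical 2x2 kernel
model = Model(inp, x)
print(model.output_shape)  # (1, 1, 15, 63, 32)
```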
It doesn't work, and TensorFlow raises the error: "Shapes (?, 1, 15, 63, 32) and (1, 1, 32) are not compatible". (Full traceback)
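For reference, the 15 x 63 in the reported shape is consistent with a 2x2 kernel and "valid" padding (an assumption; only the shapes are given in the error):

```python
def valid_conv_length(n, k, stride=1):
    """Output length of a convolution with no padding ('valid')."""
    return (n - k) // stride + 1

# 16 x 64 input, hypothetical 2x2 kernel -> 15 x 63 feature map
print(valid_conv_length(16, 2), valid_conv_length(64, 2))  # 15 63
```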
I'm pretty sure this is caused by the line tensorflow_backend.py:1243. The K.rnn function is called only if you define batch_input_shape, so the bug has to be somewhere around there. I guess it's not so relevant, but here is my full model:
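For context, a hypothetical CNN-into-stateful-LSTM model matching these constraints (batch size 1, time dimension 1; all layer sizes below are invented for illustration, not the actual model) could be wired like this:

```python
from keras.layers import Input, Conv2D, TimeDistributed, Flatten, LSTM, Dense
from keras.models import Model

inp = Input(batch_shape=(1, 1, 16, 64, 3))              # batch 1, time 1
x = TimeDistributed(Conv2D(32, (2, 2), activation='relu'))(inp)
x = TimeDistributed(Flatten())(x)                        # flatten each timestep to a 1D vector
x = LSTM(64, stateful=True)(x)                           # stateful LSTM needs a fixed batch size
out = Dense(10, activation='softmax')(x)                 # hypothetical 10-class head
model = Model(inp, out)
print(model.output_shape)  # (1, 10)
```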
Keras version: 1.1.0
TensorFlow version: 0.10.0