farizrahman4u / recurrentshop

Framework for building complex recurrent neural networks with Keras
MIT License
767 stars 218 forks

Decoder variable output_length? #113

Open TrentBrick opened 5 years ago

TrentBrick commented 5 years ago

I am trying to make an autoencoder that uses variable-length inputs (batched together). I want to have decoder=True so that the decoding portion receives the latent space as an input at every timestep. However, when I set decoder=True I also need to provide output_length=<int>. How can I make it accept a dynamic length?

TrentBrick commented 5 years ago

I think I have solved it: create a Lambda layer that takes the input tensor, reads the size of its time dimension, and returns that value so it can be passed to output_length=.

import tensorflow as tf
from keras.layers import Lambda

def get_length(input_layer):
    # Sequence length = size of the time dimension (axis 1)
    return tf.shape(input_layer)[1]

seq_length_per_batch = Lambda(get_length, output_shape=(None,))(inputs)
# Put this into the output_length parameter
TrentBrick commented 5 years ago

Never mind, this does not work. When I do it I get the error:

TypeError: Using a tf.Tensor as a Python bool is not allowed. Use if t is not None: instead of if t: to test if a tensor is defined, and use TensorFlow ops such as tf.cond to execute subgraphs conditioned on the value of a tensor.

So I have to provide an integer value...
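For context, the error can be reproduced in plain TensorFlow, outside recurrentshop (a minimal sketch, assuming TF1-style graph mode; the variable names are illustrative): tf.shape(...)[1] is a symbolic tensor rather than a Python int, so any code path that coerces output_length to a Python bool or int raises exactly this TypeError.

```python
import tensorflow as tf

# Match the TF1-style graph mode in which this error occurs.
tf.compat.v1.disable_eager_execution()

# A batch of variable-length sequences: (batch, timesteps, features).
inputs = tf.compat.v1.placeholder(tf.float32, shape=(None, None, 8))

# tf.shape(...)[1] is a symbolic Tensor, not a Python int.
length = tf.shape(inputs)[1]
print(type(length))

# Anything that coerces it to a Python bool (as a library would when
# validating an integer output_length) fails at graph-construction time:
try:
    bool(length)
except TypeError as e:
    print(e)
```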

Is there some edit to the code that would make my solution above work?
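One common workaround (not from this thread; MAX_LEN and pad_batch are illustrative names, not recurrentshop API) is to pad every batch to a fixed maximum length, so that output_length can stay an ordinary Python int:

```python
import numpy as np

MAX_LEN = 20  # assumed upper bound on sequence length

def pad_batch(batch, max_len=MAX_LEN):
    """Right-pad each (timesteps, features) sequence with zeros to max_len.

    Sequences longer than max_len are truncated.
    """
    n_features = batch[0].shape[1]
    out = np.zeros((len(batch), max_len, n_features), dtype=batch[0].dtype)
    for i, seq in enumerate(batch):
        out[i, :len(seq)] = seq[:max_len]
    return out

batch = [np.ones((5, 3)), np.ones((12, 3))]
padded = pad_batch(batch)
print(padded.shape)  # (2, 20, 3)
```

Combined with a Masking layer on the encoder side, the padded timesteps can be ignored during training, and output_length=MAX_LEN satisfies the integer requirement.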