**Open** · TrentBrick opened this issue 5 years ago
I think I have solved it. You create a Lambda layer that takes the input layer, finds its sequence dimension, and returns it as an int which is passed to `output_length=`:
```python
import tensorflow as tf
from keras.layers import Lambda

def get_length(args):
    input_layer = args
    # dynamic sequence length (dimension 1) of the batch
    return tf.shape(input_layer)[1]

seq_length_per_batch = Lambda(get_length, output_shape=(None,))(inputs)
# Put this into the output_length parameter
```
Nevermind, this does not work. When I do it I get the error:

```
TypeError: Using a `tf.Tensor` as a Python `bool` is not allowed. Use `if t is not None:`
instead of `if t:` to test if a tensor is defined, and use TensorFlow ops such as tf.cond
to execute subgraphs conditioned on the value of a tensor.
```
So it seems I have to provide a plain integer value... Is there some edit to the code that would make my solution above work?
I am trying to make an autoencoder that uses variable-length inputs (batched together). I want to set `decoder=True` so that the decoding portion receives the latent space as an input at every timestep. However, when I set `decoder=True` I need to provide `output_length=<int>`. How can I make it accept a dynamic length?
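One workaround (a sketch, not something from this library's docs) is to sidestep the dynamic length entirely: pad every sequence to a fixed maximum length up front, so `output_length` can be an ordinary Python int known at graph-construction time; a `Masking` layer would then hide the padded timesteps. The padding step with plain NumPy, using made-up sequence lengths and a hypothetical feature dimension of 3, might look like:

```python
import numpy as np

# Hypothetical variable-length sequences, each (timesteps, features).
sequences = [np.ones((5, 3)), np.ones((8, 3)), np.ones((2, 3))]

# Fixed length: the longest sequence in the dataset (a plain Python int).
MAX_LEN = max(s.shape[0] for s in sequences)

# Zero-pad every sequence up to MAX_LEN timesteps.
padded = np.zeros((len(sequences), MAX_LEN, 3))
for i, s in enumerate(sequences):
    padded[i, :s.shape[0], :] = s

# MAX_LEN can now be passed as output_length=MAX_LEN, since it is an int,
# not a symbolic tensor like tf.shape(inputs)[1].
```

The trade-off is that the decoder always unrolls for `MAX_LEN` steps, so the padded positions have to be masked (or ignored in the loss) downstream.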