My code is almost a copy-paste of the attention model. The original code works fine on its original data, but when I tweak it slightly for my data, it doesn't.
While that code works with music notation, my data consists of very small images (5x5 pixels), which already have values between 0 and 1.
My input has shape (257000, 240, 50): my sequences are 240 steps long, and I concatenate and flatten two 5x5 images to get 50 values per step (I know this is not the best strategy, but it is only a first try). The output has shape (257000, 25), i.e. just one of the images. The idea is to feed in sequences of image pairs and predict the next image. This setup works well, and produces nice results, with stacked LSTMs.
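For reference, the preprocessing that produces these shapes looks roughly like this (a minimal NumPy sketch; `img_a`, `img_b`, and the toy sequence assembly are hypothetical stand-ins for my actual pipeline):

```python
import numpy as np

# Hypothetical stand-ins for one timestep: two 5x5 images with values in [0, 1]
img_a = np.random.rand(5, 5)
img_b = np.random.rand(5, 5)

# Concatenate and flatten the pair into a single 50-dimensional feature vector
step = np.concatenate([img_a.ravel(), img_b.ravel()])  # shape (50,)

# Stacking 240 such steps gives one input sequence of shape (240, 50);
# the target is the next single image, flattened to 25 values
sequence = np.stack([step] * 240)        # shape (240, 50) -- toy repetition
target = np.random.rand(5, 5).ravel()    # shape (25,)

print(sequence.shape, target.shape)      # (240, 50) (25,)
```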
My code for attention, following the link above, is as follows:
    from tensorflow.keras.layers import (Input, Dense, LSTM, Reshape, Activation,
                                         Permute, RepeatVector, Multiply, Lambda)
    from tensorflow.keras.models import Model
    from tensorflow.keras.optimizers import RMSprop
    from tensorflow.keras import backend as K

    def create_network(n_in, embed_size=100, rnn_units=256, use_attention=True):
        """ create the structure of the neural network """
        inputs = Input(shape=(n_in.shape[1], n_in.shape[2]))
        # we will use a dense layer as embedding
        x = Dense(embed_size, activation='relu')(inputs)
        x = LSTM(rnn_units, return_sequences=True)(x)
        if use_attention:
            x = LSTM(rnn_units, return_sequences=True)(x)
            # one scalar score per timestep, then softmax over the time axis
            e = Dense(1, activation='tanh')(x)
            e = Reshape([-1])(e)
            alpha = Activation('softmax')(e)
            # broadcast the weights across the feature axis and take the
            # attention-weighted sum of the hidden states
            alpha_repeated = Permute([2, 1])(RepeatVector(rnn_units)(alpha))
            c = Multiply()([x, alpha_repeated])
            c = Lambda(lambda xin: K.sum(xin, axis=1), output_shape=(rnn_units,))(c)
        else:
            c = LSTM(rnn_units)(x)
        bz_out = Dense(25, activation='relu', name='gen_oscs')(c)
        model = Model(inputs, bz_out)
        if use_attention:
            att_model = Model(inputs, alpha)
        else:
            att_model = None
        opti = RMSprop(lr=0.001)
        model.compile(loss='mae', optimizer=opti)
        return model, att_model
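In case it helps, here is my understanding of what the attention branch computes, written as a NumPy sketch (shapes only; `h` stands in for the second LSTM's output, and the score weights `w` are random placeholders, not the trained values):

```python
import numpy as np

rng = np.random.default_rng(0)
timesteps, rnn_units = 240, 256

# h plays the role of the second LSTM's output: (timesteps, rnn_units)
h = rng.standard_normal((timesteps, rnn_units))

# Dense(1, tanh): one scalar score per timestep, then softmax over time
w = rng.standard_normal((rnn_units, 1))
e = np.tanh(h @ w).ravel()               # shape (timesteps,)
alpha = np.exp(e) / np.exp(e).sum()      # shape (timesteps,), sums to 1

# Weighted sum over timesteps: the context vector fed to the output layer
c = (alpha[:, None] * h).sum(axis=0)     # shape (rnn_units,)

print(alpha.shape, c.shape)              # (240,) (256,)
```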
When I run these functions on my dataset with use_attention set to False, so the network is just stacked LSTMs, everything works fine and the loss goes down. But when I set use_attention to True, the network does not learn anything: the loss does not go down, not even in the first iterations.
I think the attention model is somehow destroying the data, but at the moment I have no idea how.