farizrahman4u / recurrentshop

Framework for building complex recurrent neural networks with Keras
MIT License

Using return_sequences in recurrentshop #97

Open ghost opened 6 years ago

ghost commented 6 years ago

I'm trying to write a custom recurrent layer using recurrentshop. The input is a sequence of length "timesteps" and the output should be the sequence of outputs at every timestep, hence I'm using return_sequences=True.

The recurrent model has two inputs, one of which is the recurrent state from the last time step. I'm using this layer similarly to this recurrentshop example, but I keep getting an error saying "You must feed a value for placeholder tensor 'input_2' with dtype float". What am I doing wrong? The full code follows:


import keras.backend as K
from recurrentshop import RecurrentModel
import numpy as np
from keras.models import Model
from keras.layers import Dense, Reshape, Conv1D, Input, Lambda, concatenate
from keras.optimizers import Adam

# parameters:
timesteps = 35
output_dim = 315
input_dim = 10
batch_size = 100

# recurrent layer definition:
def myRNN(input_dim, output_dim):
    inp = Input((input_dim,))      # input at the current timestep
    h_tm1 = Input((output_dim,))   # hidden state from the previous timestep
    # scale the previous state by the squared norm of the current input
    modified_h = Lambda(lambda x: x * K.sum(K.square(inp)))(h_tm1)
    # transform the current input: dense -> conv over the feature axis -> sum over filters
    modified_inp = Dense(output_dim, use_bias=False, activation='tanh')(inp)
    modified_inp = Reshape((output_dim, 1))(modified_inp)
    modified_inp = Conv1D(128, 7, padding='same', activation='tanh', use_bias=False)(modified_inp)
    modified_inp = Lambda(lambda x: K.sum(x, axis=-1))(modified_inp)
    # combine both branches into the new hidden state
    hid = concatenate([modified_h, modified_inp], axis=-1)
    h_t = Dense(output_dim, use_bias=False, activation='tanh')(hid)
    return RecurrentModel(input=inp, output=h_t, initial_states=h_tm1, final_states=h_t,
                          return_sequences=True, state_initializer=['zeros'])

# building the model:
inp = Input((timesteps, input_dim))
temp = myRNN(input_dim, output_dim)(inp)
out = Reshape((timesteps * output_dim, 1))(temp)
model = Model(inputs=inp, outputs=out)
model.compile(loss='mse', optimizer='adam')

# testing the model:
inp = np.random.rand(batch_size, timesteps, input_dim)
prediction = model.predict(inp)
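
For reference, here is a small standalone sketch (plain Keras, no recurrentshop; the shapes and names are just placeholders) of the multiply-by-squared-norm step, written with both tensors passed to the Lambda as a list instead of capturing one of them from the enclosing scope. I'm not sure whether that difference is relevant to the error above:

import numpy as np
import keras.backend as K
from keras.models import Model
from keras.layers import Input, Lambda

x = Input((10,))    # stand-in for the per-timestep input (input_dim)
h = Input((315,))   # stand-in for the previous hidden state (output_dim)

# both tensors go into the Lambda as a list, so the layer sees them as
# explicit inputs rather than reaching x through a Python closure
scaled_h = Lambda(lambda t: t[0] * K.sum(K.square(t[1])))([h, x])

m = Model(inputs=[x, h], outputs=scaled_h)
pred = m.predict([np.random.rand(4, 10), np.random.rand(4, 315)])
print(pred.shape)   # (4, 315)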

Thank you very much!