farizrahman4u / recurrentshop

Framework for building complex recurrent neural networks with Keras
MIT License

'Tensor' object has no attribute '_keras_history' #87

Closed · risajef closed this issue 5 years ago

risajef commented 6 years ago

Hello, I get the error 'Tensor' object has no attribute '_keras_history' and I don't know why. Here is the code.

from keras import backend as K
from keras.layers import Input
from keras.layers import LSTM
from keras.layers import Lambda
from keras.models import Model
import recurrentshop as R

def loss(x, output): return output[-1]

def step_function(x):
    # first input: the input from the problem (1 float per timestep)
    # and the output from the LSTM (also 1 float per timestep);
    # second input: the two hidden states (1 float each)
    x_inp = x[0]
    state = x[1]
    x_t = x_inp[0]
    y_t = x_inp[1]
    S_t = state[0]
    M_t = state[1]

    # compute some stuff
    S_t1 = S_t + M_t * K.relu(y_t)*x_t
    M_t1 = (1 - K.relu(y_t))*M_t
    M_t1 += K.relu(-y_t)*x_t*S_t
    S_t1 = S_t1 - K.relu(-y_t) * S_t1

    output = K.stack([S_t1, M_t1], axis=1)
    return output

batchsize = 10
epochs = 10
timestep = 1
input_dim = 1

inp = Input((1,1))
L1 = LSTM(128, return_sequences=True, batch_input_shape=(batchsize, timestep, input_dim))(inp)
L2 = LSTM(1, return_sequences=True, batch_input_shape=(batchsize, timestep, 128))(L1)

#create recurrent layer
inp_t = Input((2,))
state_t = Input((2,))

# Compute new hidden state
h_t = Lambda(step_function)([inp_t, state_t])
# Build the RNN
rnn = R.RecurrentModel(input=inp_t, initial_states=[state_t], output=h_t, final_states=[h_t])

#reshape for the lambda function
inp_re = K.reshape(inp, (-1,1))
L2 = K.reshape(L2, (-1,1))

inp_to_rnn = K.stack([L2, inp_re], axis=1)

out = rnn(inp_to_rnn)

model = Model(inputs=inp, outputs=out)

model.compile(loss=loss,
              optimizer='rmsprop')

#model.fit(x_train, x_train, batch_size=batchsize, epochs=epochs)

#score = model.evaluate(x_train, x_train, batch_size=batchsize)

I know that this is not the way to go, but I really can't understand why this network throws an error. If I run it as written I get: 'Tensor' object has no attribute '_keras_history'. What does this mean exactly, and why do I get this error? I also don't know which variable is the problem. I think I'm reshaping things that should not be reshaped, but I can't get them to fit the RecurrentModel.
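
For reference, this error typically appears when a tensor built with raw backend ops (here the K.reshape and K.stack calls near the end) is fed to a layer or Model; such tensors lack the _keras_history metadata Keras uses to trace the graph. Below is a minimal sketch of the usual workaround, wrapping those calls in Lambda layers (names follow the code above; the rest of the script is unchanged). This addresses only the _keras_history error; the shape handling for RecurrentModel may still need work.

from keras.layers import Lambda

# Backend ops return plain tensors without _keras_history; wrapping
# them in Lambda layers keeps the graph traceable by Keras.
inp_re = Lambda(lambda t: K.reshape(t, (-1, 1)))(inp)
L2_re = Lambda(lambda t: K.reshape(t, (-1, 1)))(L2)
inp_to_rnn = Lambda(lambda ts: K.stack(ts, axis=1))([L2_re, inp_re])
out = rnn(inp_to_rnn)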

Ghembs commented 6 years ago

Getting the same problem with:

input = Input((250, 5,))
decoder_input = Input(shape = (133,))
h_in = Input(shape = (512,))
c_in = Input(shape = (512,))
readout_in = Input(shape = (133,))
enc_1 = Bidirectional(LSTM(256))
enc_mean = Dense(128)
enc_log_sigma = Dense(128)
h_init = Dense(1024)
dec_1 = LSTM(512)
dec_2 = Dense(123)
dec_3 = LSTMCell(512)

dec_out, h, c = dec_3([decoder_input, h_in, c_in])

rnn = RecurrentModel(input = decoder_input,
                          initial_states = [h_in, c_in],
                          output = dec_out, final_states = [h, c],
                          readout_input = readout_in,
                          return_sequences = True)

a = enc_1(input)
mean = enc_mean(a)
log_sigma = enc_log_sigma(a)
z = Lambda(sampling)([mean, log_sigma])
_h_in = h_init(z)
_h_in = Reshape((512, 2,))(_h_in)

z_ = Reshape((1, 128,))(z)
z_out = z_
for i in range(249):
    z_out = concatenate([z_out, z_], axis = 1)

z_out = concatenate([z_out, input], axis = 2)

print(z_out.shape[:])

# with the simple one in this comment it works, but I need readout in the model
# out = dec_1(z_out, initial_state = [_h_in[:, :, 0], _h_in[:, :, 1]])
out = rnn(z_out, initial_state = [_h_in[:, :, 0], _h_in[:, :, 1]])
out = dec_2(out)

model = Model(input, out)

Have you solved the issue?

7kbird commented 6 years ago

Write a Lambda layer wrapper around any custom Keras backend operation.

see https://github.com/keras-team/keras/issues/7362
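
A minimal, self-contained sketch of that pattern (K.squeeze is just an arbitrary example op):

from keras import backend as K
from keras.layers import Input, Lambda
from keras.models import Model

x = Input((4, 1))
# y = K.squeeze(x, axis=-1)                      # plain tensor, no _keras_history
y = Lambda(lambda t: K.squeeze(t, axis=-1))(x)   # Keras-aware tensor
model = Model(x, y)                              # builds without the error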

ghost commented 6 years ago

I'm seeing this as well, and the solutions in keras-team/keras#7362 aren't working.

Erutan-pku commented 6 years ago

Me too! I wrapped a few TF ops in a Lambda layer. Maybe I could split the batch dim with tf.split, split another dim with a dynamic shape, handle each piece separately, and concat them back? But I think this should be supported!

Erutan-pku commented 6 years ago

Oh! I just found the bug! If the Lambda is Lambda(func_a)([x, y, z]) and func_a is defined as func_a(inputs), DO NOT assign to the elements of inputs inside func_a (e.g. do not write inputs[1] = ...).

This was my bug; I hope it helps someone else. Almost 2 hours spent on this...
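
A short sketch of the pattern described above (func_a here is hypothetical; the fix is to rebind local names instead of mutating the input list):

from keras.layers import Input, Lambda

def func_a(inputs):
    x, y, z = inputs    # unpack; do not write inputs[1] = ...
    y = y * 2.0         # rebinding a local name is fine
    return x + y + z    # backend math inside the Lambda body is fine

a, b, c = Input((3,)), Input((3,)), Input((3,))
out = Lambda(func_a)([a, b, c])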

BruceDai003 commented 5 years ago

I ran into this same popular, annoying problem today and debugged it by printing the outputs of many operations. In my case the cause was the very basic, easy-to-overlook '+' operator. Here is an example: assuming x and y are two tensors with the same shape, you need to replace z = x + y with

from keras.layers import add
z = add([x, y])

The same applies to -
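
For subtraction and elementwise multiplication, Keras 2 ships analogous merge layers; a quick sketch:

from keras.layers import Input, subtract, multiply

x = Input((8,))
y = Input((8,))
d = subtract([x, y])   # instead of d = x - y
p = multiply([x, y])   # instead of p = x * y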

snknitin commented 5 years ago

@BruceDai003 Thanks, that fixed my issue with ResNets. I was using + instead of Add().

ankitom commented 5 years ago

@BruceDai003 Thanks Bruce, it worked. I have given your answer a thumbs up.

yongjinjiang commented 5 years ago

@BruceDai003 Hi Bruce, that really works! I have given your answer a thumbs up. Thanks!

slickFix commented 5 years ago

@BruceDai003 Thanks man!! It works.