Is help still needed? I'll try to respond within a week.
The problem appears to be the Lambda layer, not Embedding; `lambda x: x[:, :, 0]` seems non-differentiable, per the more informative error shown with `tf.compat.v1.disable_eager_execution()`:
```
ValueError: Variable Tensor("lambda/strided_slice:0", shape=(16, 100), dtype=float32) has `None` for gradient.
Please make sure that all of your ops have a gradient defined (i.e. are differentiable).
```
Try doing the slicing differently (e.g. with `tf.slice`; see the sketch after the code below), or avoid it, or simply don't call `get_gradients` on layers 1 and 2 (the Lambdas); the following works:
```python
from tensorflow.keras.layers import Input, Embedding
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

def make_model(rnn_layer, batch_shape, units):
    ipt = Input(batch_shape=batch_shape)
    # Embedding consumes the integer inputs directly; no Lambda slicing needed
    emb = Embedding(100, 30, input_length=100, mask_zero=True)(ipt)
    x = rnn_layer(units, activation='tanh', return_sequences=True)(emb)
    out = rnn_layer(units, activation='tanh', return_sequences=False)(x)
    model = Model(ipt, out)
    model.compile(Adam(4e-3), 'mse')
    return model
```
```python
# unchanged
units = 6
batch_shape = (16, 100)  #, 2*units)

model = make_model(LSTM, batch_shape, units)
train_model(model, 30, batch_shape)

x, y = make_data(batch_shape)
grads_all = get_gradients(model, 1, x, y)   # return_sequences=True,  layer index 1
grads_last = get_gradients(model, 2, x, y)  # return_sequences=False, layer index 2
```
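For completeness, here's a minimal sketch of the `tf.slice` alternative mentioned above, for cases where the slicing can't simply be dropped. It assumes a 3D `(batch, timesteps, features)` input; the helper name `take_first_feature` is mine, not from the library:

```python
import tensorflow as tf
from tensorflow.keras.layers import Lambda

def take_first_feature(x):
    # Stand-in for `lambda x: x[:, :, 0]`:
    # slice out feature 0, then drop the singleton last axis.
    sliced = tf.slice(x, begin=[0, 0, 0], size=[-1, -1, 1])  # (batch, steps, 1)
    return tf.squeeze(sliced, axis=-1)                       # (batch, steps)

first_feature = Lambda(take_first_feature)  # drop-in replacement layer
```

Applied in place of the original lambda, this should yield the same `(batch, timesteps)` output while keeping the op inside the differentiable graph.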
Hi there,
I like your GitHub repo, and I have started to use it in my project. I have a problem with the Embedding layer: if the first layer is an Embedding layer, see-rnn's `get_gradients` breaks down. Any suggestions?
I have tried to use an Embedding layer with your example, like this (I don't care about the results, only about getting the Embedding layer to work):
I got this error:
```
AttributeError: Tensor.name is meaningless when eager execution is enabled.
```
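As context for the reply above, a minimal sketch of how `tf.compat.v1.disable_eager_execution()` is typically applied to surface the more informative gradient error (placing it at the top of the script, before any model is built, is an assumption about usage, not part of the original report):

```python
import tensorflow as tf

# Switch to graph mode before building any model; in graph mode the
# AttributeError above is replaced by the more informative
# "has `None` for gradient" ValueError quoted earlier.
tf.compat.v1.disable_eager_execution()
```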
My model is the following: