tweet_a = Input(shape=(140, ), dtype='int32')
tweet_b = Input(shape=(140, ), dtype='int32')
emb = Embedding(input_dim=100, output_dim=10, input_length=20)
Your inputs have length 140, but your Embedding layer says the input length is 20. That doesn't make sense. Drop the input_length= argument on your Embedding layer and it will work.
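In other words, a minimal sketch of the suggested fix (repeating the two inputs quoted above for completeness; the sizes are just the ones from the snippet, not anything canonical):

from keras.layers import Input, Embedding

tweet_a = Input(shape=(140,), dtype='int32')
tweet_b = Input(shape=(140,), dtype='int32')

# either drop input_length so the layer simply follows the 140-step inputs ...
emb = Embedding(input_dim=100, output_dim=10)
# ... or keep it, but make it match the actual input length:
# emb = Embedding(input_dim=100, output_dim=10, input_length=140)

embedded_a = emb(tweet_a)  # the same Embedding instance applied twice: shared weights
embedded_b = emb(tweet_b)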
@mbollmann Good point, thanks. For now, on Keras 1.2.1, I found that the following works:
# define shared embedding
embed = Embedding(input_dim=vocab_len, output_dim=vec_size, input_length=seq_size)
# define shared lstm
title_lstm = LSTM(rnn_size)
query_lstm = title_lstm
# define query branch (shared embedding + shared LSTM)
query = Sequential()
query.add(Masking(mask_value=0.0, input_shape=(seq_size, )))
query.add(embed)
query.add(query_lstm)
# define positive title
title_p = Sequential()
title_p.add(Masking(mask_value=0.0, input_shape=(seq_size, )))
title_p.add(embed)
title_p.add(title_lstm)
# define final concatenation
model = Sequential()
model.add(Merge([query, title_p], mode='concat'))
This works, but I agree that the Masking layer is redundant; without it, however, compilation fails. I also found that using an Embedding layer produces a lot of over-fitting, possibly because of the huge number of free parameters.
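If the Masking layer is only there to make compilation succeed, one thing to try (a sketch, not something I have verified on Keras 1.2.1 with shared layers) is to let the Embedding emit the mask itself via mask_zero=True, assuming index 0 is reserved for padding:

from keras.models import Sequential
from keras.layers import Embedding, LSTM, Merge

vocab_len, vec_size, seq_size, rnn_size = 10000, 100, 30, 128  # hypothetical sizes

# shared embedding produces the mask itself; no separate Masking layer
embed = Embedding(input_dim=vocab_len, output_dim=vec_size,
                  input_length=seq_size, mask_zero=True)
# shared LSTM
title_lstm = LSTM(rnn_size)

query = Sequential()
query.add(embed)        # first layer: input_length gives Keras the input shape
query.add(title_lstm)

title_p = Sequential()
title_p.add(embed)      # same shared instances as in the query branch
title_p.add(title_lstm)

model = Sequential()
model.add(Merge([query, title_p], mode='concat'))

Whether this compiles when the same Embedding instance is the first layer of both Sequential branches is exactly the kind of thing the Masking layer may have been papering over, so treat it as a direction to try rather than a drop-in replacement.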
Suppose we have the simple example (taken from the Keras documentation) with one shared LSTM; how could I introduce one shared Embedding there? Sorry if my question has already been answered, but I read these topics and found them a bit confusing. I tried to simply create one Embedding and put it into different Sequential models, but model.fit failed with an assertion along the lines of "you only have one input but gave two". Then I took the example below and tried to add a shared Embedding.
This is the original:
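The example being referred to is, presumably, the shared-LSTM snippet from the Keras 1.x functional API guide, roughly the following (reproduced here as a sketch for context):

from keras.layers import Input, LSTM, Dense, merge
from keras.models import Model

# two tweets, each a sequence of 140 one-hot encoded characters (256 symbols)
tweet_a = Input(shape=(140, 256))
tweet_b = Input(shape=(140, 256))

# a single LSTM instance reused for both inputs, so the weights are shared
shared_lstm = LSTM(64)
encoded_a = shared_lstm(tweet_a)
encoded_b = shared_lstm(tweet_b)

merged_vector = merge([encoded_a, encoded_b], mode='concat', concat_axis=-1)
predictions = Dense(1, activation='sigmoid')(merged_vector)

model = Model(input=[tweet_a, tweet_b], output=predictions)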
This is the refactored version with a shared embedding:
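Presumably the refactored version looked something like the snippet quoted in the reply above, i.e. (my reconstruction, not the exact code):

from keras.layers import Input, Embedding, LSTM, Dense, merge
from keras.models import Model

tweet_a = Input(shape=(140,), dtype='int32')
tweet_b = Input(shape=(140,), dtype='int32')

# NOTE: input_length=20 conflicts with the 140-step inputs; this is the
# mismatch pointed out in the reply above
emb = Embedding(input_dim=100, output_dim=10, input_length=20)

shared_lstm = LSTM(64)
encoded_a = shared_lstm(emb(tweet_a))
encoded_b = shared_lstm(emb(tweet_b))

merged_vector = merge([encoded_a, encoded_b], mode='concat')
predictions = Dense(1, activation='sigmoid')(merged_vector)

model = Model(input=[tweet_a, tweet_b], output=predictions)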
But I got:
What am I doing wrong?