trungnt13 closed this issue 7 years ago
That's not how masking works. See the docs for the 'Masking' layer.
On 17 April 2016 at 17:20, TrungNT notifications@github.com wrote:
This is the model I used:
x = Input(shape=Xtrain.shape[1:], name='X')
vad = Input(shape=Xtrain_vad.shape[1:], name='vad')
l1 = LSTM(200)(x, mask=vad)
l2 = Dense(len(label), activation='softmax')(l1)

model = Model(input=[x, vad], output=l2)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit([Xtrain, Xtrain_vad], ytrain, nb_epoch=1)
And the output:
but the provided input variable at index 1 is not part of the computational graph needed to compute the outputs: vad.
Everything ran fine, but the model didn't take the mask into account. The issue probably comes from __call__ of Layer; this method totally ignores the mask:
# raise exceptions in case the input is not compatible
# with the input_spec set at build time
self.assert_input_compatibility(x)

# build and connect layer
input_added = False
input_tensors = to_list(x)

inbound_layers = []
node_indices = []
tensor_indices = []
for input_tensor in input_tensors:
    if hasattr(input_tensor, '_keras_history') and input_tensor._keras_history:
        # this is a Keras tensor
        previous_layer, node_index, tensor_index = input_tensor._keras_history
        inbound_layers.append(previous_layer)
        node_indices.append(node_index)
        tensor_indices.append(tensor_index)
    else:
        inbound_layers = None
        break
if inbound_layers:
    # this will call layer.build() if necessary
    self.add_inbound_node(inbound_layers, node_indices, tensor_indices)
    input_added = True
https://github.com/fchollet/keras/issues/2374
model = Sequential()
model.add(Masking(mask_value=0., input_shape=(timesteps, features)))
model.add(LSTM(32))
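For intuition, here is a minimal NumPy sketch (not Keras code; the function name is invented) of what Masking(mask_value=0.) computes per timestep: a timestep is masked out when all of its features equal mask_value, and downstream layers such as LSTM then skip those timesteps.

```python
import numpy as np

# Sketch of the per-timestep mask derived by a value-based Masking layer.
# x: (batch, timesteps, features) -> boolean mask of shape (batch, timesteps)
def compute_mask(x, mask_value=0.0):
    # a timestep survives if ANY feature differs from mask_value
    return np.any(x != mask_value, axis=-1)

batch = np.array([
    [[1., 2.], [0., 0.], [3., 4.]],   # middle timestep is all-zero padding
    [[0., 0.], [0., 0.], [5., 6.]],   # first two timesteps are padding
])
mask = compute_mask(batch)
print(mask.tolist())  # [[True, False, True], [False, False, True]]
```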
This masking mechanism is very inflexible.
In my case, I have 2 different masks: a normal (zero-padding) mask and a VAD mask. The mask_value of the normal mask in the input is all 0. However, the input values at the timesteps covered by the VAD mask are still speech frames, so this masking mechanism cannot deal with that situation. In particular, if I only wanted to use the VAD mask without the normal mask, I would have to set the input values at the VAD indices to 0; and setting the values at the normal-mask indices to anything other than 0 would introduce totally implausible input.
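To make the limitation concrete, here is a small NumPy sketch (illustrative names, not Keras code): a value-based mask can only be derived from the input values themselves, whereas a VAD mask covers timesteps whose values are nonzero speech frames, so it has to be supplied separately and combined, e.g. by elementwise AND.

```python
import numpy as np

# One sequence of 4 timesteps; the last two are all-zero padding.
x = np.array([[[1., 1.], [2., 2.], [0., 0.], [0., 0.]]])

# Padding mask, derivable from the values (as Masking does):
pad_mask = np.any(x != 0., axis=-1)               # [[True, True, False, False]]

# VAD mask, NOT derivable from the values: step 1 is silence
# even though its features are nonzero.
vad_mask = np.array([[True, False, True, True]])

# The mask we actually want combines both:
combined = pad_mask & vad_mask
print(combined.tolist())  # [[True, False, False, False]]
```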
Hi @fchollet, would it be possible to add some very simple examples of Masking with the Functional API, in addition to Sequential? It might be just 3 lines of code.
Any update on this? I checked the __call__ function of Layer and found that when using the Functional API, a mask like vad in l1 = LSTM(200)(x, mask=vad) will be ignored if x is a Keras tensor; the internal mask of x is used instead. In other words, x should already be masked before being passed to LSTM.
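That propagation behaviour can be sketched with a toy model (plain Python, invented class names, not Keras internals): each layer attaches a mask to its output tensor, and the next layer reads the mask from its input tensor rather than from the call argument.

```python
class ToyTensor:
    """A tensor that carries its own mask, like a Keras tensor's internal mask."""
    def __init__(self, data, mask=None):
        self.data = data   # list of timesteps, each a list of features
        self.mask = mask   # list of booleans, or None

class ToyMasking:
    def __init__(self, mask_value=0.0):
        self.mask_value = mask_value
    def __call__(self, t):
        # attach a mask: keep a timestep if any feature differs from mask_value
        mask = [any(v != self.mask_value for v in step) for step in t.data]
        return ToyTensor(t.data, mask)

class ToyLSTM:
    def __call__(self, t, mask=None):
        # like the Functional API: the mask carried by the input tensor
        # takes precedence over the mask passed as a call argument
        effective = t.mask if t.mask is not None else mask
        kept = [s for s, m in zip(t.data, effective) if m]
        return ToyTensor(kept)

x = ToyTensor([[1., 1.], [0., 0.], [2., 2.]])
out = ToyLSTM()(ToyMasking()(x))          # mask comes from ToyMasking's output
print(out.data)  # [[1.0, 1.0], [2.0, 2.0]]
```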
I still think an example of using Masking with the Functional API is necessary. Also, l1 = LSTM(200)(x, mask=vad) is a more intuitive way to me than the current mechanism, although it is currently ignored for Keras tensors.