keras-team / keras

Deep Learning for humans
http://keras.io/
Apache License 2.0

Regarding Text Autoencoders in KERAS for topic modeling #9897

Closed: maisamwasti closed this issue 3 years ago

maisamwasti commented 6 years ago

Introduction: I have trained autoencoders (vanilla & variational) in Keras on MNIST images, and have observed how well the latent representation in the bottleneck layer clusters them together.

Objective: I want to do the same for short texts. Tweets specifically! I want to cluster them together based on their semantics using pre-trained GloVe embeddings.

What I am planning to do is create a CNN encoder and a CNN decoder as a start, before moving on to LSTMs/GRUs.

Problem: What should be the correct loss? How do I implement it in Keras?

This is what my Keras model looks like:

INPUT_TWEET (Word indexes) >> EMBEDDING LAYER >> CNN_ENCODER >> BOTTLENECK >> CNN_DECODER >> OUTPUT_TWEET (Word indexes)

Layer (type)                 Output Shape              Param #   
-----------------------------------------------------------------
Input_Layer (InputLayer)     (None, 64)                0         
embedding_1 (Embedding)      (None, 64, 200)           3299400   
enc_DO_0_layer (Dropout)     (None, 64, 200)           0         
enc_C_1 (Conv1D)             (None, 64, 16)            9616      
enc_MP_1 (MaxPooling1D)      (None, 32, 16)            0         
enc_C_2 (Conv1D)             (None, 32, 8)             392       
enc_MP_2 (MaxPooling1D)      (None, 16, 8)             0         
enc_C_3 (Conv1D)             (None, 16, 8)             200       
enc_MP_3 (MaxPooling1D)      (None, 8, 8)              0         
***bottleneck (Flatten)***   (None, 64)                0         
reshape_2 (Reshape)          (None, 8, 8)              0         
dec_C_1 (Conv1D)             (None, 8, 8)              200       
dec_UpS_1 (UpSampling1D)     (None, 16, 8)             0         
dec_C_2 (Conv1D)             (None, 16, 8)             200       
dec_UpS_2 (UpSampling1D)     (None, 32, 8)             0         
dec_C_3 (Conv1D)             (None, 32, 16)            400       
dec_UpS_3 (UpSampling1D)     (None, 64, 16)            0         
conv1d_2 (Conv1D)            (None, 64, 200)           9800      
dense_2 (Dense)              (None, 64, 1)             201       
flatten_2 (Flatten)          (None, 64)                0         
-----------------------------------------------------------------

This is clearly wrong, because it minimizes the MSE loss between the input and output word indexes, whereas I think the loss should be computed in embedding space (between the outputs of embedding_1 and conv1d_2).

Now how do I do it? Does it make sense? Is there a way to do this in Keras? Please check my code below:

The code:

from keras.layers import Input, Dropout, Conv1D, MaxPooling1D, UpSampling1D, Flatten, Reshape, Dense
from keras.models import Model

# embedding_layer is an Embedding layer built elsewhere from the GloVe matrix
sequence_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32', name="Input_Layer")
embedded_sequences = embedding_layer(sequence_input)
embedded_sequences1 = Dropout(0.5, name="enc_DO_0_layer")(embedded_sequences)

x = Conv1D(filters=16, kernel_size=3, activation='relu', padding='same',name="enc_C_1")(embedded_sequences1)
x = MaxPooling1D(pool_size=2, padding='same',name='enc_MP_1')(x)
x = Conv1D(filters=8, kernel_size=3, activation='relu', padding='same',name="enc_C_2")(x)
x = MaxPooling1D(pool_size=2, padding='same',name="enc_MP_2")(x)
x = Conv1D(filters=8, kernel_size=3, activation='relu', padding='same',name="enc_C_3")(x)
x = MaxPooling1D(pool_size=2, padding='same',name="enc_MP_3")(x)

encoded = Flatten(name="bottleneck")(x)
x = Reshape((8, 8))(encoded)

x = Conv1D(filters=8, kernel_size=3, activation='relu', padding='same',name="dec_C_1")(x)
x = UpSampling1D(2,name="dec_UpS_1")(x)
x = Conv1D(filters=8, kernel_size=3, activation='relu', padding='same', name="dec_C_2")(x)
x = UpSampling1D(2, name="dec_UpS_2")(x)
x = Conv1D(filters=16, kernel_size=3, activation='relu', padding='same', name="dec_C_3")(x)
x = UpSampling1D(2, name="dec_UpS_3")(x)
decoded = Conv1D(filters=200, kernel_size=3, activation='relu', padding='same')(x)
y = Dense(1)(decoded)
y = Flatten()(y)

autoencoder = Model(sequence_input, y)
autoencoder.compile(optimizer='adam', loss='mean_squared_error')

autoencoder.fit(x = tweet_word_indexes ,y = tweet_word_indexes,
            epochs=10,
            batch_size=128,
            validation_split=0.2)

I don't want it to do this:

It is obviously just trying to reconstruct the (zero-padded) array of word indexes, because of the bad loss.

Input  = [ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1641 13 2 309 932 1 10 5 6]  
Output = [ -0.31552997 -0.53272009 -0.60824025 -1.14802313 -1.14597917 -1.08642125 -1.10040164 -1.19442761 -1.19560885 -1.19008029 -1.19456315 -1.2288748 -1.22721946 -1.20107424 -1.20624077 -1.24017036 -1.24014354 -1.2400831 -1.24004364 -1.23963416 -1.23968709 -1.24039733 -1.24027216 -1.23946059 -1.23946059 -1.23946059 -1.23946059 -1.23946059 -1.23946059 -1.23946059 -1.23946059 -1.23946059 -1.23946059 -1.14516866 -1.20557368 -1.5288837 -1.48179781 -1.05906188 -1.17691648 -1.94568193 -1.85741842 -1.30418646 -0.83358657 -1.61638248 -1.17812908 0.53077424 0.79578459 -0.40937367 0.35088596 1.29912627 -5.49394751 -27.1003418 -1.06875408 33.78763962 109.41391754 242.43798828 251.05577087 300.13430786 267.90420532 178.17596436 132.06596375 60.63394928 82.10819244 91.18526459]
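One way to act on the "loss in embedding space" idea above (a sketch under my own assumptions, not a confirmed fix from this thread; toy dimensions stand in for the 200-d GloVe vectors) is to drop the final Dense(1)/Flatten pair so the decoder ends at the 200-d conv1d_2 output, and fit against the *embedded* tweets rather than the raw index arrays:

```python
import numpy as np

# Toy stand-ins: in the real model EMB_DIM would be 200 and emb_matrix
# would be the GloVe weight matrix already used by embedding_layer.
EMB_DIM = 16
VOCAB_SIZE = 50
emb_matrix = np.random.rand(VOCAB_SIZE, EMB_DIM).astype('float32')

tweet_word_indexes = np.random.randint(0, VOCAB_SIZE, size=(32, 8))

# Look up each index row-wise: shape (batch, seq_len, EMB_DIM).
embedded_targets = emb_matrix[tweet_word_indexes]

# Then, with the Dense(1)/Flatten head removed:
# autoencoder.fit(tweet_word_indexes, embedded_targets, ...)
# using e.g. loss='mse' (or a cosine-based loss) in that embedding space.
```

The inputs stay as integer index arrays (so the Embedding layer still works), but the reconstruction target and loss now live in the continuous embedding space.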

Question: Does it make sense to you? What should be the correct loss? How do I implement it in Keras?
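For reference, another common pattern (my assumption, not an answer given in this thread) is to keep the integer targets but make the reconstruction a per-timestep word classification: end the decoder in a Dense softmax over the vocabulary and train with sparse_categorical_crossentropy. A minimal runnable sketch with toy sizes (the encoder/decoder stack is collapsed to one Conv1D purely for brevity):

```python
import numpy as np
from tensorflow.keras.layers import Input, Embedding, Conv1D, Dense
from tensorflow.keras.models import Model

VOCAB_SIZE = 50   # toy vocabulary
SEQ_LEN = 8       # toy sequence length

seq_in = Input(shape=(SEQ_LEN,), dtype='int32')
x = Embedding(VOCAB_SIZE, 16)(seq_in)
x = Conv1D(16, 3, activation='relu', padding='same')(x)   # stand-in for enc/dec
probs = Dense(VOCAB_SIZE, activation='softmax')(x)        # (batch, seq, vocab)

ae = Model(seq_in, probs)
ae.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

# Targets are the same index arrays, with a trailing axis for the sparse loss.
tweets = np.random.randint(1, VOCAB_SIZE, size=(32, SEQ_LEN))
ae.fit(tweets, tweets[..., None], epochs=1, batch_size=16, verbose=0)
```

This avoids regressing on arbitrary index values entirely: each output position is a proper categorical distribution over words.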

cmertin commented 5 years ago

@maisamwasti did you figure out the solution to this?

For the embedding layer, you can set weights=[emb], where emb = CreateEmbedding() builds the embedding matrix, that is, pulls the vectors out of the GloVe embeddings dictionary you have pre-loaded in memory.
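A sketch of what such a helper might look like (CreateEmbedding itself is not shown in this thread, so the function below is a hypothetical reconstruction): map each word's tokenizer index to its GloVe vector, leaving row 0 for padding and unknown words as zeros, then pass the matrix to the Embedding layer via weights=[...].

```python
import numpy as np

def create_embedding(word_index, glove, dim):
    """Hypothetical helper: word_index maps word -> int id (1-based),
    glove maps word -> np.ndarray of shape (dim,)."""
    emb = np.zeros((len(word_index) + 1, dim), dtype='float32')  # row 0 = padding
    for word, i in word_index.items():
        vec = glove.get(word)
        if vec is not None:
            emb[i] = vec          # words missing from GloVe stay all-zero
    return emb

# Toy usage:
glove = {'cat': np.ones(4, dtype='float32')}
word_index = {'cat': 1, 'dog': 2}
emb = create_embedding(word_index, glove, 4)
# embedding_layer = Embedding(emb.shape[0], 4, weights=[emb], trainable=False)
```

Setting trainable=False keeps the pre-trained GloVe vectors frozen during training; leave it trainable if you want them fine-tuned.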