XifengGuo / DEC-keras

Keras implementation for Deep Embedding Clustering (DEC)
MIT License
477 stars · 162 forks

SAE part may be wrong. #23

Closed Wang-Yu-Qing closed 3 years ago

Wang-Yu-Qing commented 3 years ago

The SAE (stacked autoencoder) part should be trained layer-wise, meaning the next autoencoder only starts training after the previous one has finished. From the original paper:

After training of one layer, we use its output h as the input to train the n

However, in the model structure image (autoencoders.png), the encoders are chained together, followed by the decoders, and there is only a single training phase over the whole "autoencoder".
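For contrast, the greedy layer-wise scheme described above can be sketched roughly as follows. This is not the repository's code; it is a minimal NumPy illustration using linear autoencoders (the paper uses nonlinear, denoising autoencoders), where `train_linear_autoencoder` is a hypothetical helper: each autoencoder is trained to completion, and only then does its hidden output become the next layer's input.

```python
import numpy as np

def train_linear_autoencoder(X, hidden_dim, lr=0.01, epochs=200, seed=0):
    """Train one linear autoencoder X -> H -> X_hat by gradient descent
    on mean squared reconstruction error. Returns encoder weights and H."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W_enc = rng.normal(scale=0.1, size=(d, hidden_dim))
    W_dec = rng.normal(scale=0.1, size=(hidden_dim, d))
    for _ in range(epochs):
        H = X @ W_enc          # encode
        X_hat = H @ W_dec      # decode
        err = X_hat - X        # reconstruction error
        # gradients of the mean squared reconstruction loss
        W_dec -= lr * (H.T @ err) / n
        W_enc -= lr * (X.T @ (err @ W_dec.T)) / n
    return W_enc, X @ W_enc

# Greedy layer-wise pretraining: layer 2 sees layer 1's output only
# after layer 1 is fully trained (one training phase per layer).
rng = np.random.default_rng(42)
X = rng.normal(size=(256, 20))
W1, H1 = train_linear_autoencoder(X, hidden_dim=10)   # layer 1 on raw input
W2, H2 = train_linear_autoencoder(H1, hidden_dim=5)   # layer 2 on H1
print(H1.shape, H2.shape)  # (256, 10) (256, 5)
```

The repository instead wires all encoders and decoders into one model and trains them jointly in a single phase, which is the difference the issue points out.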

XifengGuo commented 3 years ago

@Wang-Yu-Qing I didn't mean to re-implement DEC by exactly following the settings in the paper.

smiler96 commented 3 years ago

> The SAE (stacked autoencoder) part should be trained layer-wise, which means the next autoencoder starts to be trained after the previous one is trained. From origin paper:
>
> After training of one layer, we use its output h as the input to train the n
>
> However from the output of model structure image (autoencoders.png), the encoders are connected to each other and then follows a number of decoders and there is only one training phase over the whole "autoencoder".

Does this setting really make a difference?