Closed: Wang-Yu-Qing closed this issue 3 years ago.
@Yu-Qing-Wang I didn't mean to re-implement the DEC by exactly following the settings in the paper.
The SAE (stacked autoencoder) part should be trained layer-wise, meaning the next autoencoder starts training only after the previous one has finished. From the original paper:

> After training of one layer, we use its output h as the input to train the next layer.
However, judging from the model structure image (autoencoders.png), the encoders are connected to each other, followed by a stack of decoders, and there is only one training phase over the whole "autoencoder".
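The difference between the two schemes can be sketched in a few lines. Below is a minimal NumPy illustration of greedy layer-wise pretraining, where each autoencoder's hidden output `h` becomes the training input for the next one. The linear layers, plain gradient descent, and function names here are illustrative assumptions, not the paper's actual setup (DEC uses denoising autoencoders with dropout and nonlinearities):

```python
import numpy as np

def train_autoencoder(X, hidden_dim, lr=0.05, epochs=500, seed=0):
    """Train one linear autoencoder X -> h -> X_hat by gradient descent on MSE.

    This is a toy stand-in for a single denoising-autoencoder training phase.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W_enc = rng.normal(0.0, 0.1, (d, hidden_dim))
    W_dec = rng.normal(0.0, 0.1, (hidden_dim, d))
    for _ in range(epochs):
        H = X @ W_enc           # encode
        err = H @ W_dec - X     # reconstruction error
        # gradients of the mean squared reconstruction loss
        grad_dec = H.T @ err / n
        grad_enc = X.T @ (err @ W_dec.T) / n
        W_dec -= lr * grad_dec
        W_enc -= lr * grad_enc
    return W_enc, W_dec

def greedy_layerwise_pretrain(X, layer_dims):
    """SAE-style pretraining: each autoencoder trains on the previous output h.

    Contrast with the single-phase scheme in autoencoders.png, where all
    encoders and decoders are chained and trained together in one pass.
    """
    encoders = []
    inp = X
    for h_dim in layer_dims:
        W_enc, _ = train_autoencoder(inp, h_dim)
        encoders.append(W_enc)
        inp = inp @ W_enc  # output h becomes the input of the next layer
    return encoders
```

In the single-phase alternative, the whole encoder/decoder chain would instead be optimized jointly against one reconstruction loss on `X`, with no intermediate per-layer training phases.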
Does this setting really make a difference?