Jakobovski closed this issue 8 years ago
Hi @Jakobovski.
For point 1, I didn't include the decoder layers because they are dropped after the pretraining procedure. Maybe we could write something like `784 <-> 1024 <-> 784`, `1024 <-> 512 <-> 1024`, `512 <-> 256 <-> 512` for the pretraining phase and `784 -> 1024 -> 512 -> 256 -> num_classes` for the finetuning phase.
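For reference, here is a minimal sketch of what those shapes mean in practice, written in plain PyTorch rather than this project's API: each autoencoder is pretrained greedily on the codes of the previous one, and its decoder half is dropped afterwards. The batch size, corruption noise level, and toy training loop are assumptions for illustration only.

```python
import torch
import torch.nn as nn

layer_sizes = [784, 1024, 512, 256]          # encoder sizes discussed above
encoders, decoders = [], []

x = torch.rand(64, 784)                      # dummy MNIST-sized batch
for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
    enc = nn.Sequential(nn.Linear(n_in, n_out), nn.Sigmoid())
    dec = nn.Sequential(nn.Linear(n_out, n_in), nn.Sigmoid())
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
    for _ in range(10):                      # toy pretraining loop
        noisy = x + 0.1 * torch.randn_like(x)             # denoising corruption
        loss = nn.functional.mse_loss(dec(enc(noisy)), x)  # reconstruct clean input
        opt.zero_grad()
        loss.backward()
        opt.step()
    encoders.append(enc)
    decoders.append(dec)
    x = enc(x).detach()                      # next autoencoder trains on these codes
```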
For point 2, both the encoder and decoder layers are described because they are both used at finetuning. "stacked denoising autoencoder" is for supervised learning, "stacked deep autoencoder" is for unsupervised learning. Denoising should probably be removed, I agree; it is kind of confusing.
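To make the distinction concrete, here is a sketch of the two finetuning stacks (again just assumed shapes in PyTorch, not this repo's code): the supervised classifier keeps only the pretrained encoder layers plus a classification layer, while the unsupervised deep autoencoder keeps the decoder layers as well and is finetuned on reconstruction. In practice each layer would be initialized from the pretrained weights rather than built from scratch.

```python
import torch.nn as nn

def dense(n_in, n_out):
    # stand-in for one pretrained layer; real code would reuse the pretrained weights
    return nn.Sequential(nn.Linear(n_in, n_out), nn.Sigmoid())

num_classes = 10   # hypothetical number of target classes

# Supervised finetuning: 784 -> 1024 -> 512 -> 256 -> num_classes (decoders dropped)
classifier = nn.Sequential(
    dense(784, 1024), dense(1024, 512), dense(512, 256),
    nn.Linear(256, num_classes),
)

# Unsupervised finetuning: 784 -> 1024 -> 512 -> 256 -> 512 -> 1024 -> 784 (decoders kept)
deep_autoencoder = nn.Sequential(
    dense(784, 1024), dense(1024, 512), dense(512, 256),
    dense(256, 512), dense(512, 1024), dense(1024, 784),
)
```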
Any kind of contribution is really welcome :smile:
I found a few issues with the project's documentation:

1. The docs list the layers as `784 <-> 1024, 1024 <-> 784, 784 <-> 512, 512 <-> 256`, but I think they should be changed to `784 <-> 1024, 784 <-> 512, 512 <-> 256`. It probably also makes sense to include the decoder layers.
2. "This command trains a Stack of Denoising Autoencoders". Denoising should probably be removed. The neurons-per-layer numbers described in that section contain both the encoder and decoder layers, but in the stacked denoising section only the encoder layers are described.

I am happy to contribute these changes if you would like.