gabrieleangeletti / Deep-Learning-TensorFlow

Ready to use implementations of various Deep Learning algorithms using TensorFlow.
http://blackecho.github.io
MIT License

Issues with the documentation #8

Closed Jakobovski closed 8 years ago

Jakobovski commented 8 years ago

I found a few issues with the project's documentation:

  1. The neurons-per-layer numbers in the "stacked denoising autoencoder" section seem to be wrong. Currently they are 784 <-> 1024, 1024 <-> 784, 784 <-> 512, 512 <-> 256, but I think they should be changed to 784 <-> 1024, 784 <-> 512, 512 <-> 256. It probably also makes sense to include the decoder layers.
  2. The "stacked deep autoencoder" section says "This command trains a Stack of Denoising Autoencoders". "Denoising" should probably be removed. Also, the neurons-per-layer numbers in that section include both the encoder and decoder layers, while the stacked denoising section describes only the encoder layers.

I am happy to contribute these changes if you would like.

gabrieleangeletti commented 8 years ago

Hi @Jakobovski.

For point 1, I didn't include the decoder layers because they are dropped after the pretraining procedure. Maybe we could write something like 784 <-> 1024 <-> 784, 1024 <-> 512 <-> 1024, 512 <-> 256 <-> 512 for the pretraining phase and 784 -> 1024 -> 512 -> 256 -> num_classes for the finetuning phase.
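To make the layer bookkeeping concrete, here is a small hypothetical sketch (not the repo's actual API) that derives both phases from the encoder sizes: each autoencoder is pretrained as input <-> hidden <-> input with the decoder half dropped afterwards, and the finetuning network stacks the encoders and appends a `num_classes` output layer.

```python
def pretraining_shapes(encoder_sizes):
    """Shapes of the greedy layer-wise pretraining autoencoders.

    Each consecutive pair (n_in, n_hidden) of encoder sizes yields one
    autoencoder n_in <-> n_hidden <-> n_in; the decoder half (the second
    n_in) is discarded after pretraining.
    """
    return [(n_in, n_hidden, n_in)
            for n_in, n_hidden in zip(encoder_sizes, encoder_sizes[1:])]


def finetuning_shape(encoder_sizes, num_classes):
    """Layer sizes of the supervised finetuning network: the stacked
    encoders followed by a num_classes output layer."""
    return encoder_sizes + [num_classes]


sizes = [784, 1024, 512, 256]
print(pretraining_shapes(sizes))    # [(784, 1024, 784), (1024, 512, 1024), (512, 256, 512)]
print(finetuning_shape(sizes, 10))  # [784, 1024, 512, 256, 10]
```

This matches the notation above: three pretraining autoencoders, then a single 784 -> 1024 -> 512 -> 256 -> num_classes network for finetuning.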

For point 2, both the encoder and decoder layers are described because both are used at finetuning. "stacked deep autoencoder" is for unsupervised learning, while "stacked denoising autoencoder" is for supervised learning. I agree that "Denoising" should probably be removed; it is kind of confusing.

Any kind of contribution is really welcome :smile: