Open ehfo0 opened 7 years ago
Hi @ehfo0: Thanks for watching and for the feedback. Yes, you are right, the number of layers should normally be equal in the encoder and decoder. The reason I use a different number is that my computer has only a two-core i5 CPU, so I replaced the last layer of the encoder with the variational layer to speed up training.
I don't know a good way to tune the model systematically; I just tried several different activation functions and numbers of layers, then checked the output images and the loss curve to verify the model. Many scripts use relu for all activations on MNIST, but although I haven't finished this code, on this dataset I found sigmoid performs better.
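For reference, the layout described above can be sketched roughly as below. This is a minimal NumPy forward-pass sketch, not the repo's actual code: the layer sizes (784→256→2 and 2→128→256→784) are hypothetical, and random weights stand in for trained parameters. It just illustrates a 2-layer encoder whose last layer is the variational layer (outputting mu and log_var), with a 3-layer sigmoid decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dense(x, in_dim, out_dim):
    # Randomly initialized weights stand in for trained parameters.
    w = rng.normal(0.0, 0.1, size=(in_dim, out_dim))
    b = np.zeros(out_dim)
    return x @ w + b

def encode(x):
    h = sigmoid(dense(x, 784, 256))     # encoder layer 1
    mu = dense(h, 256, 2)               # encoder layer 2 = variational layer: mean
    log_var = dense(h, 256, 2)          # ...and log variance
    return mu, log_var

def reparameterize(mu, log_var):
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * log_var) * eps   # z = mu + sigma * eps

def decode(z):
    h = sigmoid(dense(z, 2, 128))       # decoder layer 1
    h = sigmoid(dense(h, 128, 256))     # decoder layer 2
    return sigmoid(dense(h, 256, 784))  # decoder layer 3: reconstruction

x = rng.random((4, 784))                # a fake batch of flattened 28x28 images
mu, log_var = encode(x)
x_hat = decode(reparameterize(mu, log_var))
print(x_hat.shape)                      # (4, 784)
```

The point of the asymmetry is that the variational layer doubles as the encoder's final layer, so one dense layer is saved on the encoder side; the decoder still needs enough depth to map the 2-D latent back to 784 pixels.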
On the VAE: why did you use 2 layers for the encoder but 3 layers for the decoder?
Shouldn't they be equal? And how can we get a better result?