Imorton-zd opened this issue 8 years ago
Does the paper recommend training the entire model all at once? Did you try layer-wise pre-training?
@EderSantana The paper doesn't mention whether the entire model is trained at once. Could you briefly explain how to do layer-wise pre-training? I'm sorry, I have never tried it. Thanks!
It's called greedy layer-wise pretraining. It's a bit of a pain to do, but you can find tutorials online. Essentially you train an autoencoder for each layer, one layer at a time.
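For reference, a minimal sketch of what that looks like in Keras. The layer sizes, epochs, and the random data at the bottom are placeholders, not settings from the paper:

```python
# Greedy layer-wise pretraining sketch: train one autoencoder per layer,
# then feed each layer's codes to the next autoencoder.
import numpy as np
from keras.layers import Input, Dense
from keras.models import Model

def pretrain_layers(x, layer_sizes, epochs=10):
    encoders = []
    current = x
    for size in layer_sizes:
        inp = Input(shape=(current.shape[1],))
        code = Dense(size, activation='relu')(inp)
        recon = Dense(current.shape[1], activation='sigmoid')(code)
        ae = Model(inp, recon)
        ae.compile(optimizer='adam', loss='mse')
        ae.fit(current, current, epochs=epochs, batch_size=128, verbose=0)
        encoder = Model(inp, code)
        encoders.append(encoder)
        current = encoder.predict(current)  # codes become the next layer's input
    return encoders

# Example: pretrain a 784 -> 256 -> 64 stack on dummy data
x = np.random.rand(1000, 784).astype('float32')
encoders = pretrain_layers(x, [256, 64])
```

After pretraining, you'd typically stack the trained encoder layers into one model and fine-tune end to end.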
Better than that, I just noticed that Keras has an implementation of variational autoencoders in the examples folder. I think you should try that first; with that one you don't need layer-wise pretraining as much: https://github.com/fchollet/keras/blob/master/examples/variational_autoencoder.py
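The core idea behind that example, condensed into a sketch. The dimensions and loss weighting here are assumptions, not the script's exact values; see the linked file for the real thing:

```python
# VAE core: encode to a mean and log-variance, sample z with the
# reparameterization trick, and decode back to the input space.
from keras import backend as K
from keras.layers import Input, Dense, Lambda
from keras.models import Model

original_dim, intermediate_dim, latent_dim = 784, 256, 2

x = Input(shape=(original_dim,))
h = Dense(intermediate_dim, activation='relu')(x)
z_mean = Dense(latent_dim)(h)
z_log_var = Dense(latent_dim)(h)

def sampling(args):
    # z = mean + sigma * epsilon, with epsilon ~ N(0, I)
    z_mean, z_log_var = args
    eps = K.random_normal(shape=(K.shape(z_mean)[0], latent_dim))
    return z_mean + K.exp(0.5 * z_log_var) * eps

z = Lambda(sampling)([z_mean, z_log_var])
decoded = Dense(intermediate_dim, activation='relu')(z)
x_hat = Dense(original_dim, activation='sigmoid')(decoded)

vae = Model(x, x_hat)

def vae_loss(x_true, x_pred):
    # Reconstruction term plus KL divergence to the unit Gaussian prior
    recon = original_dim * K.mean(K.binary_crossentropy(x_true, x_pred), axis=-1)
    kl = -0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
    return recon + kl

vae.compile(optimizer='adam', loss=vae_loss)
```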
Hi, I hope I'm not bothering you. Recently I implemented a simple autoencoder with Keras for text classification to do domain adaptation, but it performs worse than the original document representation, about 10% lower. As the paper shows, a stacked denoising autoencoder should improve performance for domain adaptation. Would you help me check for errors and give me some suggestions? Thanks!
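One thing worth checking: a denoising autoencoder corrupts the input during training but reconstructs the clean input, which is what pushes it to learn features that transfer across domains; a plain autoencoder skips that step. A minimal sketch of the denoising version, where the vocabulary size, hidden size, and corruption rate are assumptions rather than the paper's settings:

```python
# Denoising autoencoder sketch for document vectors: train on corrupted
# inputs with clean targets, then use the hidden layer as features.
import numpy as np
from keras.layers import Input, Dense
from keras.models import Model

input_dim, hidden_dim = 5000, 500  # e.g. bag-of-words vocabulary size

x = Input(shape=(input_dim,))
h = Dense(hidden_dim, activation='relu')(x)
x_hat = Dense(input_dim, activation='sigmoid')(h)
dae = Model(x, x_hat)
dae.compile(optimizer='adam', loss='binary_crossentropy')

def corrupt(x, rate=0.3):
    # Masking noise: randomly zero out a fraction of each input vector
    mask = np.random.binomial(1, 1.0 - rate, size=x.shape)
    return x * mask

docs = np.random.rand(1000, input_dim).astype('float32')  # dummy data
dae.fit(corrupt(docs), docs, epochs=10, batch_size=64, verbose=0)
features = Model(x, h).predict(docs)  # new document representation
```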