Kaixhin / Autoencoders

Torch implementations of various types of autoencoders
MIT License

Different training and testing conditionals in Seq2SeqAE #6

Closed mohitsharma0690 closed 7 years ago

mohitsharma0690 commented 7 years ago

In the LSTM autoencoder you have, the training and testing inputs to the decoder are different. That is, during training the decoder predicts p(x[t+1] | x[<=t]), but during testing it instead conditions on the outputs of previous timesteps, i.e. p(x[t+1] | y[<=t]).
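For concreteness, here is a minimal PyTorch-style sketch of the two conditioning modes (the repo itself is Torch/Lua; the class and variable names below are illustrative, not taken from the repo):

```python
# Minimal sketch contrasting the two decoder conditioning modes.
# All names here are illustrative, not from Kaixhin/Autoencoders.
import torch
import torch.nn as nn

class Seq2SeqAE(nn.Module):
    def __init__(self, input_size=10, hidden_size=32):
        super().__init__()
        self.encoder = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.decoder = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, input_size)

    def forward(self, x, teacher_forcing=True):
        # Encode the whole sequence; the final state summarizes it.
        _, state = self.encoder(x)
        batch, steps, _ = x.shape
        # First decoder input: a zero "start" frame (an arbitrary choice here).
        step_in = torch.zeros(batch, 1, x.size(-1))
        outputs = []
        for t in range(steps):
            h, state = self.decoder(step_in, state)
            y_t = self.out(h)              # prediction for step t
            outputs.append(y_t)
            if teacher_forcing:
                step_in = x[:, t:t+1, :]   # training: condition on ground truth x[<=t]
            else:
                step_in = y_t              # testing: condition on own outputs y[<=t]
        return torch.cat(outputs, dim=1)
```

With `teacher_forcing=True` the decoder sees the ground-truth frame at each step (the training setup described above); with `teacher_forcing=False` it feeds back its own prediction (the testing setup).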

This seems a bit off to me. Is it expected to work like this? Is there a reference for doing it this way?

mohitsharma0690 commented 7 years ago

Ahh, I see why it's done this way: at test time the ground-truth sequence isn't available, so the decoder has to condition on its own previous outputs!