vineetjohn / linguistic-style-transfer

Neural network parametrized objective to disentangle and transfer style and content in text

Decoder - Conditional generation on previous word #2

Closed: vineetjohn closed this issue 6 years ago

vineetjohn commented 6 years ago

The decoder RNN output at each time step should be conditioned not only on the rnn_output but also on the prediction from the previous time step.

The ground truth shouldn't be fed, because at inference time the decoder is expected to output words that differ from the source sentence.
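A minimal NumPy sketch of that feedback loop, assuming a toy single-layer RNN with made-up sizes (none of these names come from this repo): at each step the input is the embedding of the previous step's own argmax prediction rather than the ground-truth word.

```python
import numpy as np

# Toy sizes, purely illustrative.
vocab_size, emb_dim, hidden_dim, max_len = 50, 8, 16, 5
rng = np.random.RandomState(0)

embedding = rng.randn(vocab_size, emb_dim)          # token embeddings
W_in = rng.randn(emb_dim + hidden_dim, hidden_dim)  # recurrent weights
W_out = rng.randn(hidden_dim, vocab_size)           # projection to vocab

state = np.zeros(hidden_dim)
prev_token = 0  # assumed start-of-sequence id
generated = []
for _ in range(max_len):
    # The input at each step is the embedding of the *previous prediction*,
    # not the ground-truth word, so generation can diverge from the source.
    x = np.concatenate([embedding[prev_token], state])
    state = np.tanh(x @ W_in)
    logits = state @ W_out
    prev_token = int(np.argmax(logits))  # greedy choice fed to the next step
    generated.append(prev_token)

print(generated)
```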

vineetjohn commented 6 years ago

During training, the decoder should rely on the ground truth of the previous step (teacher forcing), and during generation, it should rely on its own previous prediction.

https://stackoverflow.com/questions/44690275/is-there-any-difference-between-traininghelper-and-greedyembeddinghelper-in-tens

https://www.tensorflow.org/tutorials/seq2seq
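A rough sketch of how the two helpers from the linked answer could be wired up with tf.contrib.seq2seq (TF 1.x). All names, sizes, and token ids below are illustrative assumptions, not identifiers from this codebase: TrainingHelper feeds the ground-truth previous word during training, while GreedyEmbeddingHelper feeds back the embedding of the previous greedy prediction during generation.

```python
import tensorflow as tf

# Illustrative sizes and token ids; none of these come from this repo.
vocab_size, emb_dim, hidden_dim, batch_size, max_len = 1000, 64, 128, 32, 20
start_token_id, end_token_id = 1, 2

embedding_matrix = tf.get_variable("embedding", [vocab_size, emb_dim])
decoder_cell = tf.nn.rnn_cell.GRUCell(hidden_dim)
output_layer = tf.layers.Dense(vocab_size)
initial_state = decoder_cell.zero_state(batch_size, tf.float32)

# Ground-truth decoder inputs (shifted right), used only during training.
decoder_input_ids = tf.placeholder(tf.int32, [batch_size, max_len])
decoder_inputs = tf.nn.embedding_lookup(embedding_matrix, decoder_input_ids)
sequence_lengths = tf.fill([batch_size], max_len)

def decode(helper):
    # Share the decoder weights between the training and inference graphs.
    with tf.variable_scope("decoder", reuse=tf.AUTO_REUSE):
        decoder = tf.contrib.seq2seq.BasicDecoder(
            cell=decoder_cell, helper=helper,
            initial_state=initial_state, output_layer=output_layer)
        outputs, _, _ = tf.contrib.seq2seq.dynamic_decode(
            decoder, maximum_iterations=max_len)
    return outputs.rnn_output  # [batch, time, vocab_size] logits

# Training: TrainingHelper feeds the ground-truth previous word at each step.
train_logits = decode(tf.contrib.seq2seq.TrainingHelper(
    inputs=decoder_inputs, sequence_length=sequence_lengths))

# Generation: GreedyEmbeddingHelper embeds the previous step's argmax and
# feeds it back as the next input.
infer_logits = decode(tf.contrib.seq2seq.GreedyEmbeddingHelper(
    embedding=embedding_matrix,
    start_tokens=tf.fill([batch_size], start_token_id),
    end_token=end_token_id))
```

The two graphs reuse the same cell, embedding, and output projection, so the only difference between training and generation is which helper decides the next input.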