Closed: DonghyungKo closed this issue 5 years ago
Hi, thanks for sharing your code.
I've read your seq2seq implementation, and I was wondering about the RNN Encoder–Decoder model.
In the paper 'Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation', the authors describe a proposed gating unit (a new hidden-state activation function), but I couldn't find it in your code.
Do you have any plans to add the proposed activation, or is it okay to just skip that part?
Thank you so much in advance.
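For reference, here is a minimal NumPy sketch of the gating unit the paper proposes (the reset/update gates and the candidate hidden state from Section 2.3 of Cho et al., 2014). The weight names, dimensions, and the small usage example are illustrative assumptions, not taken from this repository's code:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, W_r, U_r, W_z, U_z, W_h, U_h):
    """One step of the gating unit from Cho et al. (2014).
    All weight matrices here are hypothetical placeholders."""
    # Reset gate: decides how much of the previous state to expose
    r = sigmoid(W_r @ x + U_r @ h_prev)
    # Update gate: interpolates between the old state and the candidate state
    z = sigmoid(W_z @ x + U_z @ h_prev)
    # Candidate hidden state, computed on the reset previous state
    h_tilde = np.tanh(W_h @ x + U_h @ (r * h_prev))
    # New hidden state (paper's convention: z keeps the old state)
    return z * h_prev + (1.0 - z) * h_tilde

# Tiny usage example with random weights (hypothetical sizes)
input_dim, hidden_dim = 4, 3
rng = np.random.default_rng(0)
x = rng.standard_normal(input_dim)
h = np.zeros(hidden_dim)
W = {n: rng.standard_normal((hidden_dim, input_dim)) for n in ("r", "z", "h")}
U = {n: rng.standard_normal((hidden_dim, hidden_dim)) for n in ("r", "z", "h")}
h = gru_step(x, h, W["r"], U["r"], W["z"], U["z"], W["h"], U["h"])
print(h.shape)  # (3,)
```

Note that if the implementation builds its encoder/decoder on torch.nn.GRU or torch.nn.GRUCell, this gating is already built into those modules, so it would not need to be re-implemented by hand.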
Yes. But contributions are always open.
Thank you.
I was just wondering whether it matters or not.
Straightforward code, by the way; really nice work, thanks!