Closed zhbbupt closed 5 years ago
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
I follow the monotonic attention described here: https://arxiv.org/pdf/1704.00784.pdf.
In TensorFlow, it works well (source code here: https://github.com/tensorflow/tensorflow/blob/r1.4/tensorflow/contrib/seq2seq/python/ops/attention_wrapper.py).
But in PyTorch, it does not work. Here is my source code. Could you take a look, please?
@r9y9
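For reference, a minimal sketch of the soft (training-time) monotonic attention recurrence from the paper above, written in PyTorch. This is an assumption about what the recurrence should look like, not the reporter's actual code: it implements α[i,j] = p[i,j] * ((1 − p[i,j−1]) * α[i,j−1] / p[i,j−1] + α[i−1,j]) via the substitution q[j] = α[i,j] / p[i,j], analogous to the `recursive` mode of TensorFlow's `monotonic_attention`. The function name and argument shapes are illustrative.

```python
import torch


def monotonic_attention(p_choose, previous_attention):
    """Soft monotonic attention (one decoder step).

    p_choose:           (batch, T) selection probabilities p[i, j] in (0, 1)
    previous_attention: (batch, T) attention weights from decoder step i-1
    returns:            (batch, T) attention weights for decoder step i
    """
    batch, T = p_choose.shape
    # Shifted (1 - p) terms: position j uses (1 - p[:, j-1]); position 0 uses 1.
    shifted_1mp = torch.cat(
        [p_choose.new_ones(batch, 1), 1.0 - p_choose[:, :-1]], dim=1
    )
    q = p_choose.new_zeros(batch)
    attention = []
    for j in range(T):
        # q[j] = (1 - p[j-1]) * q[j-1] + alpha_{i-1}[j]
        q = shifted_1mp[:, j] * q + previous_attention[:, j]
        attention.append(p_choose[:, j] * q)
    return torch.stack(attention, dim=1)
```

A quick sanity check: with `p_choose` identically 1 the attention should simply copy `previous_attention` forward, and in general each row should sum to at most 1 (some probability mass may "fall off" the end of the memory).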