graykode / nlp-tutorial

Natural Language Processing Tutorial for Deep Learning Researchers
https://www.reddit.com/r/MachineLearning/comments/amfinl/project_nlptutoral_repository_who_is_studying/
MIT License

Seq2Seq(Attention) may have a mistake #81

Closed: NKUmianman closed this issue 11 months ago

NKUmianman commented 11 months ago

The Seq2Seq attention score should be calculated from the hidden states of the encoder and decoder, not from the outputs of the encoder and decoder.
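
For reference, a minimal sketch (not the repository's code) of the formulation being proposed: a Luong-style dot-product score between the current decoder hidden state and each per-step encoder hidden state, followed by a softmax over encoder steps. All names here (`n_hidden`, `enc_hiddens`, `dec_hidden`, `attn`) are illustrative assumptions, not identifiers from the tutorial.

```python
import torch
import torch.nn as nn

n_step, n_hidden = 5, 128

# Learned projection applied to each encoder hidden state before the dot product
attn = nn.Linear(n_hidden, n_hidden)

# enc_hiddens: the encoder RNN's hidden state at every time step, [n_step, n_hidden]
# dec_hidden: the decoder's current hidden state, [n_hidden]
enc_hiddens = torch.randn(n_step, n_hidden)
dec_hidden = torch.randn(n_hidden)

# score_i = dec_hidden . W(enc_hidden_i), then softmax over the encoder steps
scores = torch.stack(
    [torch.dot(dec_hidden, attn(enc_hiddens[i])) for i in range(n_step)]
)
attn_weights = torch.softmax(scores, dim=0)  # [n_step]
context = attn_weights @ enc_hiddens         # [n_hidden], attention-weighted sum
```

One caveat: for a single-layer unidirectional `nn.RNN`, the output at each step is exactly that step's hidden state, so in that special case the two formulations coincide; the distinction only matters for multi-layer encoders.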