d2l-ai / d2l-en

Interactive deep learning book with multi-framework code, math, and discussions. Adopted at 500 universities from 70 countries including Stanford, MIT, Harvard, and Cambridge.
https://D2L.ai

Typo in Figure d2l-en/chapter_attention-mechanisms/seq2seq-attention.md #810

Closed: sergulaydore closed this issue 4 years ago

sergulaydore commented 4 years ago

The word "content" should be "context", and the decoder outputs are mislabeled as encoder outputs in the figure.
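The distinction the figure should convey is that the attention *context* is built from the encoder outputs, queried by the decoder state. A minimal sketch of dot-product attention illustrating this (shapes and names are hypothetical, not the book's actual code):

```python
import numpy as np

def attention_context(dec_state, enc_outputs):
    """Dot-product attention: the context is a weighted sum of the
    *encoder* outputs, with weights derived from the decoder state.

    dec_state:   (hidden,)            current decoder hidden state (query)
    enc_outputs: (num_steps, hidden)  one encoder output per source step
    """
    scores = enc_outputs @ dec_state        # (num_steps,) similarity scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                # softmax over source steps
    return weights @ enc_outputs            # (hidden,) context vector

rng = np.random.default_rng(0)
enc_outputs = rng.normal(size=(4, 8))
dec_state = rng.normal(size=8)
ctx = attention_context(dec_state, enc_outputs)
print(ctx.shape)  # (8,)
```

Feeding the decoder's own outputs back in as the context (as the mislabeled figure suggests) would defeat the purpose of attending over the source sequence.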

goldmermaid commented 4 years ago

Hi @sergulaydore, good catch! I have made the changes in PR https://github.com/d2l-ai/d2l-en/pull/882. Please close this issue if the fix addresses your concern. :)

astonzhang commented 4 years ago

Closing since it's fixed.