Hi, first of all, thanks for creating this repository and sharing it with the community. I have some confusion after reading the research paper and the code.
On page 5 of your research paper, under content-based attention, you state:
- hi is the hidden state of the encoder at the current time step i ∈ {0, 1, ..., N-1}
But shouldn't hi be the output of the encoder?
- st is the hidden state of the decoder at the current time step t ∈ {0, 1, ..., T-1}, where T is the maximum length of the decoded character sequence
I believe this is implemented at line 48 of seq2seq.py. How can t ∈ {0, 1, ..., T-1}? How can t be 0, since there will never be a case where t is 0?
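For context, here is a minimal NumPy sketch of how I currently understand the content-based attention described on page 5. The function name, weight shapes, and dimensions are my own assumptions for illustration, not taken from seq2seq.py; note that the decoder state at t = 0 would simply be the state at the first decoding step:

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - np.max(x))
    return e / e.sum()

def content_based_attention(h, s_t, W1, W2, v):
    """Content-based (Bahdanau-style) attention sketch -- my own reading.

    h   : (N, d) encoder hidden states h_i, i in {0, ..., N-1}
    s_t : (d,)   decoder hidden state at step t (t = 0 is the first step)
    """
    # e_{t,i} = v^T tanh(W1 h_i + W2 s_t), one score per encoder position
    scores = np.tanh(h @ W1.T + s_t @ W2.T) @ v   # (N,)
    alpha = softmax(scores)                       # attention weights, sum to 1
    context = alpha @ h                           # weighted sum of encoder states
    return context, alpha

# toy usage with random weights (hypothetical dimensions)
rng = np.random.default_rng(0)
N, d = 5, 4
h = rng.standard_normal((N, d))
s0 = rng.standard_normal(d)            # decoder state at t = 0
W1 = rng.standard_normal((d, d))
W2 = rng.standard_normal((d, d))
v = rng.standard_normal(d)
context, alpha = content_based_attention(h, s0, W1, W2, v)
```

If this sketch matches your implementation, then t = 0 corresponds to attending with the decoder's initial state before the first character is emitted, which is perhaps what the paper intends.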