spro / practical-pytorch

Go to https://github.com/pytorch/tutorials - this repo is deprecated and no longer maintained

Question about Luong Attention Implementation #130

Open kyquang97 opened 5 years ago

kyquang97 commented 5 years ago

Hi @spro, I've read your implementation of Luong attention in the PyTorch seq2seq translation tutorial. In the context-calculation step you're using rnn_output as the input when calculating attn_weights, but I think we should use the hidden state at the current decoder timestep instead. Please check it, and could you provide an explanation if I'm wrong?

beebrain commented 4 years ago

@kyquang97 Luong takes the previous context vector and concatenates it with the previous output vector as the input to the RNN. The RNN output is then passed to the attention layer to compute the context vector for the current timestep. That current context vector is combined with the current RNN output to produce the output for this timestep. Please note that the current context vector is also passed on to the next timestep.
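For reference, here is a minimal sketch of one decoder timestep following the input-feeding flow described above, using a dot-style score. The class name, layer names, and shapes are illustrative assumptions, not the tutorial's exact code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LuongDecoderStep(nn.Module):
    """One decoder timestep in the input-feeding Luong style (illustrative)."""

    def __init__(self, hidden_size, vocab_size):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, hidden_size)
        # RNN input = [embedded previous token ; previous context vector]
        self.gru = nn.GRU(hidden_size * 2, hidden_size)
        self.concat = nn.Linear(hidden_size * 2, hidden_size)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, input_token, last_hidden, last_context, encoder_outputs):
        # input_token: (1, batch); last_hidden/last_context: (1, batch, hidden)
        # encoder_outputs: (src_len, batch, hidden)
        embedded = self.embedding(input_token)                       # (1, B, H)
        rnn_input = torch.cat([embedded, last_context], dim=2)       # (1, B, 2H)
        rnn_output, hidden = self.gru(rnn_input, last_hidden)        # (1, B, H)

        # dot-style attention: score the current RNN output against encoder outputs
        scores = torch.sum(rnn_output * encoder_outputs, dim=2)      # (S, B)
        attn_weights = F.softmax(scores, dim=0).unsqueeze(2)         # (S, B, 1)
        context = torch.sum(attn_weights * encoder_outputs,
                            dim=0, keepdim=True)                     # (1, B, H)

        # combine the current context with the current RNN output
        concat_output = torch.tanh(
            self.concat(torch.cat([rnn_output, context], dim=2)))    # (1, B, H)
        output = self.out(concat_output)                             # (1, B, vocab)
        return output, hidden, context  # context is fed to the next timestep
```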

Coderx7 commented 4 years ago

@beebrain Please correct me if I'm wrong, but you are using the LSTM layer instead of the LSTM cell, so each forward pass processes a different sample, not a different timestep of a single sample. You have no control over individual timesteps here; what you get out of the RNN in this configuration is a sequence that has already gone through all timesteps!
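For what it's worth, a small sketch (single layer, CPU; the weight copy is only for the comparison) suggesting that an nn.GRU called on a length-1 sequence performs the same single step as a GRUCell, so a decoder that is driven one token per call still controls each timestep:

```python
import torch
import torch.nn as nn

# Copy weights from a single-layer nn.GRU into an nn.GRUCell so both
# compute the same function, then compare one step.
gru = nn.GRU(input_size=8, hidden_size=8)
cell = nn.GRUCell(input_size=8, hidden_size=8)
cell.weight_ih.data.copy_(gru.weight_ih_l0.data)
cell.weight_hh.data.copy_(gru.weight_hh_l0.data)
cell.bias_ih.data.copy_(gru.bias_ih_l0.data)
cell.bias_hh.data.copy_(gru.bias_hh_l0.data)

x = torch.randn(1, 4, 8)     # (seq_len=1, batch=4, input_size)
h0 = torch.zeros(1, 4, 8)    # (num_layers=1, batch, hidden_size)

out, h1 = gru(x, h0)         # one timestep through the layer
h1_cell = cell(x[0], h0[0])  # one timestep through the cell

print(torch.allclose(h1[0], h1_cell, atol=1e-6))  # True
```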

syorami commented 4 years ago

> @kyquang97 Luong takes the previous context vector and concatenates it with the previous output vector as the input to the RNN. The RNN output is then passed to the attention layer to compute the context vector for the current timestep. That current context vector is combined with the current RNN output to produce the output for this timestep. Please note that the current context vector is also passed on to the next timestep.

I think he just meant that in this implementation, the rnn_output is fed into the attention layer instead of the current decoder hidden state, which is inconsistent with the description in the original paper.

beebrain commented 4 years ago

> > @kyquang97 Luong takes the previous context vector and concatenates it with the previous output vector as the input to the RNN. The RNN output is then passed to the attention layer to compute the context vector for the current timestep. That current context vector is combined with the current RNN output to produce the output for this timestep. Please note that the current context vector is also passed on to the next timestep.
>
> I think he just meant that in this implementation, the rnn_output is fed into the attention layer instead of the current decoder hidden state, which is inconsistent with the description in the original paper.

I think rnn_output and the hidden output of self.gru have the same values, so you can use either hidden or rnn_output.
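A quick way to check this claim (assuming a single-layer, unidirectional GRU run for one timestep, as in the decoder under discussion):

```python
import torch
import torch.nn as nn

gru = nn.GRU(input_size=16, hidden_size=16)
x = torch.randn(1, 2, 16)    # (seq_len=1, batch=2, input_size): one decoder step
h0 = torch.zeros(1, 2, 16)   # (num_layers=1, batch, hidden_size)

rnn_output, hidden = gru(x, h0)
# For a one-step call, the last output slice and the final hidden state match.
print(torch.equal(rnn_output[-1], hidden[-1]))  # True
```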

syorami commented 4 years ago

> > > @kyquang97 Luong takes the previous context vector and concatenates it with the previous output vector as the input to the RNN. The RNN output is then passed to the attention layer to compute the context vector for the current timestep. That current context vector is combined with the current RNN output to produce the output for this timestep. Please note that the current context vector is also passed on to the next timestep.
> >
> > I think he just meant that in this implementation, the rnn_output is fed into the attention layer instead of the current decoder hidden state, which is inconsistent with the description in the original paper.
>
> I think rnn_output and the hidden output of self.gru have the same values, so you can use either hidden or rnn_output.

You do remind me! I was also confused at first by the usage of outputs and hidden states in some attention implementations, and they do in fact share the same values. By the way, what about the LSTM? According to the PyTorch docs, the LSTM returns hidden states as well as cell states. Are cell states used in attention, or can I treat the outputs and the last hidden state as equivalent?

beebrain commented 4 years ago

> > > > @kyquang97 Luong takes the previous context vector and concatenates it with the previous output vector as the input to the RNN. The RNN output is then passed to the attention layer to compute the context vector for the current timestep. That current context vector is combined with the current RNN output to produce the output for this timestep. Please note that the current context vector is also passed on to the next timestep.
> > >
> > > I think he just meant that in this implementation, the rnn_output is fed into the attention layer instead of the current decoder hidden state, which is inconsistent with the description in the original paper.
> >
> > I think rnn_output and the hidden output of self.gru have the same values, so you can use either hidden or rnn_output.
>
> You do remind me! I was also confused at first by the usage of outputs and hidden states in some attention implementations, and they do in fact share the same values. By the way, what about the LSTM? According to the PyTorch docs, the LSTM returns hidden states as well as cell states. Are cell states used in attention, or can I treat the outputs and the last hidden state as equivalent?

In my opinion, you can use the hidden state output just as with the GRU.
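For the LSTM question, a small sketch (shapes are assumptions): nn.LSTM returns the output plus a (hidden, cell) tuple. The output/hidden state is what you would normally score against in attention, just as with the GRU, while the cell state is only carried forward to the next timestep:

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=16, hidden_size=16)
x = torch.randn(1, 2, 16)                    # one decoder timestep
h0 = torch.zeros(1, 2, 16)
c0 = torch.zeros(1, 2, 16)

rnn_output, (hidden, cell) = lstm(x, (h0, c0))
print(torch.equal(rnn_output[-1], hidden[-1]))  # True: same values as with the GRU
# Attention scores would be computed from rnn_output (or hidden);
# the cell state is just carried over to the next timestep.
```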

richardsun-voyager commented 4 years ago

I am also confused about why we can calculate all the attention scores for the source sentence using the previous hidden state and current input embedding.
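For context, a hedged sketch contrasting the two scoring styles being discussed: variant A predicts one weight per source position from the previous hidden state and the current input embedding (the pattern the question refers to), while variant B scores the current decoder state against the encoder outputs, Luong-style. The names, max_length, and shapes are assumptions for illustration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

hidden_size, max_length = 16, 10
attn = nn.Linear(hidden_size * 2, max_length)

embedded = torch.randn(1, hidden_size)             # current input embedding
prev_hidden = torch.randn(1, hidden_size)          # previous decoder hidden state
encoder_outputs = torch.randn(max_length, hidden_size)

# Variant A: one weight per source position predicted from
# [embedded ; previous hidden] alone (no comparison with encoder content).
weights_a = F.softmax(attn(torch.cat((embedded, prev_hidden), dim=1)), dim=1)
context_a = weights_a @ encoder_outputs            # (1, hidden_size)

# Variant B (Luong dot-style): score the *current* decoder state directly
# against the encoder outputs.
curr_state = torch.randn(1, hidden_size)           # e.g. rnn_output at this step
weights_b = F.softmax(curr_state @ encoder_outputs.t(), dim=1)
context_b = weights_b @ encoder_outputs            # (1, hidden_size)
```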