In cases where the input sequence is a list of sentences,
we concatenate the sentences into a long list of word
tokens, inserting after each sentence an end-of-sentence token.
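If I understand that quote correctly, the preprocessing step looks something like the sketch below (the sentences and the `<eos>` marker are made up for illustration, not taken from any repo):

```python
# Toy example of the preprocessing described in the quote above;
# the sentences and the "<eos>" marker are invented for illustration.
sentences = [["mary", "got", "the", "milk"],
             ["john", "moved", "to", "the", "bedroom"]]
EOS = "<eos>"

tokens = []
for sentence in sentences:
    tokens.extend(sentence)  # append the sentence's word tokens
    tokens.append(EOS)       # insert an end-of-sentence token after it

print(tokens)
# ['mary', 'got', 'the', 'milk', '<eos>',
#  'john', 'moved', 'to', 'the', 'bedroom', '<eos>']
```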
In the input module, how is the information coming from the context embedded? The paper ("Ask Me Anything: Dynamic Memory Networks for Natural Language Processing") says to concatenate all the words in the context, add an EOS token at the end of each sentence, and feed the whole sequence through an RNN with GRU units, then take the hidden states at each time step. I went through the code, but it is a bit different there. What is actually happening? Is each sentence fed through an RNN separately?
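For reference, here is a rough sketch of how I picture the version described in the paper, assuming a PyTorch-style GRU; the vocabulary size, dimensions, and token ids are all invented, and this is not the repo's actual code:

```python
import torch
import torch.nn as nn

# All sizes and ids below are invented for illustration.
vocab_size, embed_dim, hidden_dim = 40, 80, 80
EOS_ID = 1

embed = nn.Embedding(vocab_size, embed_dim)
gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)

# One context: three sentences concatenated into a single sequence,
# with an EOS id inserted after each sentence.
token_ids = torch.tensor([[4, 7, 9, EOS_ID, 12, 5, EOS_ID, 8, 3, 6, EOS_ID]])

# A single pass over the whole concatenated sequence yields a hidden
# state at every time step.
hidden_states, _ = gru(embed(token_ids))  # shape (1, seq_len, hidden_dim)

# If I read the paper right, the per-sentence representations ("facts")
# are then the hidden states at the end-of-sentence positions.
facts = hidden_states[0, token_ids[0] == EOS_ID]
print(facts.shape)  # torch.Size([3, 80])
```

If that is roughly what is supposed to happen, I do not see where each sentence would be fed through its own RNN, which is why the code confused me.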