nyu-dl / dl4mt-tutorial


discrepancy between paper and code #63

Closed: ethancaballero closed this issue 8 years ago

ethancaballero commented 8 years ago

Starting at line 716 of nmt.py (https://github.com/nyu-dl/dl4mt-tutorial/blob/master/session2/nmt.py#L716), to compute \tilde{t}_i, why does the code pass each addend's variables through a separate feed-forward layer, instead of just computing the matrix products as in the last equation of the DECODER part of section A.2.2 in https://arxiv.org/pdf/1409.0473.pdf:

\tilde{t}_i = U_o s_{i-1} + V_o E y_{i-1} + C_o c_i
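To make the comparison concrete, here is that equation written out in numpy (a minimal sketch; all dimensions are made up for illustration):

```python
import numpy as np

# All dimensions here are invented for illustration.
rng = np.random.RandomState(0)
dim, dim_word, ctx_dim, out_dim = 4, 3, 8, 5

s_prev = rng.randn(dim)        # s_{i-1}: previous decoder state
e_y    = rng.randn(dim_word)   # E y_{i-1}: previous target-word embedding
c      = rng.randn(ctx_dim)    # c_i: attention context vector

U_o = rng.randn(out_dim, dim)
V_o = rng.randn(out_dim, dim_word)
C_o = rng.randn(out_dim, ctx_dim)

# The paper's readout equation as a single expression:
t_tilde = U_o @ s_prev + V_o @ e_y + C_o @ c
```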

orhanf commented 8 years ago

Hi @ethancaballero, thank you for pointing this out. Let me try to clarify the differences.

The last equation of section A.2.2 from the original paper is implemented in this repo with three feed-forward layers that have linear (identity) activations: we first add their outputs and then apply a tanh non-linearity.
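A minimal numpy sketch of that readout (not the repo's actual layer helpers; names and dimensions are made up):

```python
import numpy as np

rng = np.random.RandomState(1)
dim, dim_word, ctx_dim, out_dim = 4, 3, 8, 5

s   = rng.randn(dim)       # decoder hidden state
e_y = rng.randn(dim_word)  # previous target-word embedding
c   = rng.randn(ctx_dim)   # attention context vector

# Three feed-forward layers with identity ("linear") activations.
W_s, b_s = rng.randn(out_dim, dim), rng.randn(out_dim)
W_e, b_e = rng.randn(out_dim, dim_word), rng.randn(out_dim)
W_c, b_c = rng.randn(out_dim, ctx_dim), rng.randn(out_dim)

ff = lambda W, b, x: W @ x + b  # "linear" activation = identity

# Sum the three outputs, then apply tanh. Up to the biases, the sum
# is exactly the paper's U_o s + V_o E y + C_o c.
t_tilde = np.tanh(ff(W_s, b_s, s) + ff(W_e, b_e, e_y) + ff(W_c, b_c, c))
```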

The original code of the paper also implements the readout with three separate feed-forward layers, as you can see here. It adds them together but then applies a maxout non-linearity. This is the first difference, and in practice (in terms of BLEU) there will be no difference at the end if you replace the maxout with a simple tanh.
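For comparison, here is a sketch of a 2-way maxout, which halves the dimensionality of the pre-activation, whereas tanh preserves it (illustrative only, not GroundHog's actual code):

```python
import numpy as np

def maxout(x, pool_size=2):
    """Maxout non-linearity: group units into pools of `pool_size`
    and keep the maximum of each pool. For pool_size=2 this halves
    the dimensionality, as in Bahdanau et al., appendix A.2.2."""
    assert x.shape[-1] % pool_size == 0
    return x.reshape(x.shape[:-1] + (-1, pool_size)).max(axis=-1)

t_tilde = np.array([0.3, -1.2, 2.0, 0.5, -0.7, 0.1])
print(maxout(t_tilde))   # -> [0.3 2.  0.1]
print(np.tanh(t_tilde))  # tanh keeps all six dimensions instead
```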

The second difference: in the original GroundHog code, the readout layer uses a single bias, whereas our implementation uses a separate bias for each of the three feed-forward layers. This is a very minor detail, since the three biases simply add up to one effective bias, and again you will see no difference at the end.
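A quick numpy check of that equivalence (made-up shapes):

```python
import numpy as np

rng = np.random.RandomState(2)
W1, W2, W3 = (rng.randn(5, 4) for _ in range(3))
b1, b2, b3 = (rng.randn(5) for _ in range(3))
x, y, z = (rng.randn(4) for _ in range(3))

# Three biases, one per feed-forward layer (this repo) ...
three_bias = (W1 @ x + b1) + (W2 @ y + b2) + (W3 @ z + b3)

# ... collapse into a single effective bias (GroundHog).
single_bias = W1 @ x + W2 @ y + W3 @ z + (b1 + b2 + b3)

assert np.allclose(three_bias, single_bias)
```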

The third difference is a more fundamental one, related to ease of implementation. In the decoder, when we compute \tilde{t}_i, we use the current hidden state of the decoder s_i instead of the previous hidden state s_{i-1}. Again, at the end you will not observe any difference in the automatic metrics.
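Schematically, the two variants differ only in which state feeds the readout. In the sketch below, `attend`, `gru_step`, and `readout` are hypothetical toy stand-ins (not the repo's actual functions), and all inputs share one dimension for simplicity:

```python
import numpy as np

rng = np.random.RandomState(3)
dim, n_annot = 4, 6
H = rng.randn(n_annot, dim)  # encoder annotations

def attend(s, H):
    # toy dot-product attention, a stand-in for the alignment MLP
    a = H @ s
    a = np.exp(a - a.max()); a /= a.sum()
    return a @ H

def gru_step(s, y, c):
    # toy state update, a stand-in for the full (conditional) GRU
    return np.tanh(s + y + c)

def readout(s, y, c):
    # toy readout, a stand-in for the three ff layers + tanh
    return np.tanh(s + y + c)

s_prev, y_prev = rng.randn(dim), rng.randn(dim)
c = attend(s_prev, H)

# Paper (A.2.2): the readout conditions on the previous state s_{i-1}.
t_paper = readout(s_prev, y_prev, c)
s_i = gru_step(s_prev, y_prev, c)

# This repo: update the state first, then read out from the current s_i.
t_repo = readout(s_i, y_prev, c)
```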

There is one more difference in the decoder compared to the original implementation: we use a slightly different conditional GRU layer with attention; you can find the documentation here.
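A schematic of that conditional GRU with attention, as described in the repo's documentation (two GRU transitions with the attention in between); `toy_gru` and `attend` below are hypothetical stand-ins, not the real gated implementation:

```python
import numpy as np

rng = np.random.RandomState(4)
dim, n_annot = 4, 6
annotations = rng.randn(n_annot, dim)  # encoder annotations C

def toy_gru(h, x):
    # stand-in for a full GRU transition (no gates, illustration only)
    return np.tanh(h + x)

def attend(s, H):
    # toy dot-product attention, a stand-in for the alignment MLP
    a = H @ s
    a = np.exp(a - a.max()); a /= a.sum()
    return a @ H

def cgru_step(s_prev, y_prev, H):
    """One step of the conditional GRU with attention (cGRU):
    an intermediate GRU, then attention, then a second GRU."""
    s_inter = toy_gru(s_prev, y_prev)  # s'_j = GRU_1(y_{j-1}, s_{j-1})
    c = attend(s_inter, H)             # c_j  = ATT(C, s'_j)
    s_new = toy_gru(s_inter, c)        # s_j  = GRU_2(c_j, s'_j)
    return s_new, c

s, c = cgru_step(rng.randn(dim), rng.randn(dim), annotations)
```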