google / trax

Trax — Deep Learning with Clear Code and Speed

Does the Reformer have more parameters than the baseline? #1749

Open · alexm-gc opened this issue 2 years ago

alexm-gc commented 2 years ago

Regarding Reformer: paper | code

From the paper:

> ... show that it performs the same as the normal Transformer when using the same number of parameters; we achieve this by having both x1 and x2 have size d_model.

I see how the parameters of Attention and MLP do not increase. But what about (1) the embedding layer and (2) the final projection layer?
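For context, this is my understanding of the reversible residual layers described in the paper (a rough sketch of how I read it, not the actual Trax implementation):

```python
# Reversible residual block as I read it from the paper: both streams have
# size d_model, and Attention / FeedForward still map d_model -> d_model,
# so their weight shapes match the baseline Transformer.
def reversible_block(x1, x2, attention, feed_forward):
    y1 = x1 + attention(x2)      # attention sees a d_model-sized input
    y2 = x2 + feed_forward(y1)   # feed-forward sees a d_model-sized input
    return y1, y2
```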

Question 0. Why do the parameters of the initial embedding layer not increase if we double d_model?
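To make Question 0 concrete, here is the back-of-the-envelope count I have in mind (toy numbers and my own naive reading; not taken from the Trax code):

```python
# Toy parameter counts (hypothetical sizes, not from any Trax config):
vocab_size = 32_000
d_model = 512

# Baseline Transformer:
baseline_embedding = vocab_size * d_model       # token id -> d_model vector
baseline_projection = d_model * vocab_size      # d_model vector -> logits

# Naive reading of "both x1 and x2 have size d_model": the total activation
# width is 2 * d_model, which would seem to double these two layers as well.
naive_embedding = vocab_size * (2 * d_model)
naive_projection = (2 * d_model) * vocab_size

print(f"baseline embedding : {baseline_embedding:,}")
print(f"naive 2x embedding : {naive_embedding:,}")  # this doubling is what Question 0 asks about
```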