microsoft / mup

maximal update parametrization (µP)
https://arxiv.org/abs/2203.03466
MIT License

MuAdam not adjusting lr for output weights #7

Closed zhuzilin closed 2 years ago

zhuzilin commented 2 years ago

Hi, thank you for your great project for hyperparameter tuning!

As our team was migrating mup to another training framework, we noticed that MuAdam does not scale the learning rate for the output weights as illustrated in the TP5 paper:

[screenshot of the µP table from the TP5 paper, which scales the Adam learning rate of the output weights by 1/fan-in]

https://github.com/microsoft/mup/blob/c9d67001c47ae254ea4b7e26146ffd059520b6ba/mup/optim.py#L55-L70

It seems to us that only the lr of the hidden layers (layers with two infinite dimensions) is scaled w.r.t. fan-in, while the output weights are ignored. We wonder if this is intended. Thank you!
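
For reference, here is a minimal sketch of the scaling we expected from that table (not the package's actual code; the parameter grouping below is a simplification just for illustration):

```python
import torch
import torch.nn as nn

def table_style_adam_groups(model: nn.Module, base_lr: float):
    """Sketch of the scaling we expected: every weight matrix,
    hidden *and* output, gets its Adam lr divided by its fan-in."""
    groups = []
    for p in model.parameters():
        if p.ndim >= 2:  # weight matrices
            groups.append({"params": [p], "lr": base_lr / p.shape[-1]})
        else:            # biases, layernorm gains, etc.
            groups.append({"params": [p], "lr": base_lr})
    return groups

# optimizer = torch.optim.Adam(table_style_adam_groups(model, base_lr=1e-3))
```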

edwardjhu commented 2 years ago

Hi zhuzilin,

Thanks for your question.

There are many equivalent ways to implement muP, and you are right that what is implemented in this package is not described by the table you attached. Instead, you want to look at Table 8.

[screenshot of Table 8 from the paper: the alternative µP formulation implemented in this package]

We also noted in the caption of the table you attached that "also see Table 8 for a µP formulation that is easier to implement (and compatible with input/output weight sharing)." Please let us know if this answers your question!

zhuzilin commented 2 years ago

@edwardjhu Thank you Edward! Your answer resolved my confusion. Just to double-check: if I need to implement a custom output layer, Table 8 means I should initialize the output weight with std 1 and always divide the layer's output by fan-in, right?
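
Something roughly like this sketch, for example (the class name is a placeholder and the literal constants are just what I read off the table):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MyMuOutput(nn.Module):
    """Sketch of a custom output layer as I understand Table 8:
    weights initialized with std 1, output divided by fan-in."""
    def __init__(self, fan_in: int, fan_out: int):
        super().__init__()
        self.fan_in = fan_in
        self.weight = nn.Parameter(torch.randn(fan_out, fan_in))  # std = 1 init
        self.bias = nn.Parameter(torch.zeros(fan_out))

    def forward(self, x):
        # divide the output by fan-in
        return F.linear(x, self.weight, self.bias) / self.fan_in
```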

edwardjhu commented 2 years ago

That's right!

thegregyang commented 2 years ago

@zhuzilin I want to fill in more information here that may have been lost in the context. We don't want you to use exactly std=1 and divide the output layer by exactly fan-in. You should interpret the 1 as O(1) and fan-in as O(fan-in). In other words, this table just says that, when you double your fan-in, the multiplier on the last layer should be halved, but the initialization should be unchanged. The exact numbers you use for the initialization and the multiplier should be tuned from some base model. This discussion applies to all other parameters in the table.
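
Concretely, a small sketch of what that looks like for the output layer (the base width and tuned constants here are made-up numbers, only meant to show the scaling):

```python
# Illustrative numbers only: suppose these were tuned on a small base model.
base_fan_in = 256
base_output_mult = 0.5   # tuned multiplier on the output layer
base_init_std = 0.04     # tuned init std for the output weights

def scaled_output_hparams(fan_in):
    """Doubling fan-in halves the output multiplier; the init std stays put."""
    mult = base_output_mult * base_fan_in / fan_in  # O(1 / fan-in)
    std = base_init_std                             # O(1)
    return mult, std

print(scaled_output_hparams(512))   # (0.25, 0.04)
print(scaled_output_hparams(1024))  # (0.125, 0.04)
```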

Regarding the output layer specifically, we actually recommend initializing it at 0 if possible (assuming you don't have tricky weight tying between the input and output weights). This should not affect the performance of your model after training, but it will typically improve the transfer quality. You can see Section D.2 in the paper for more details.
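
If your output layer is a plain linear layer, the zero init is just something like this (a sketch assuming a standard `nn.Linear` readout; the sizes are hypothetical):

```python
import torch.nn as nn

readout = nn.Linear(1024, 50000)  # hypothetical hidden size and vocab size
nn.init.zeros_(readout.weight)    # zero-initialize the output weights
nn.init.zeros_(readout.bias)
```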