microsoft / mup

maximal update parametrization (µP)
https://arxiv.org/abs/2203.03466
MIT License

Learning rate decay #64

Open afcruzs opened 8 months ago

afcruzs commented 8 months ago

Hello, I have a small question regarding the µP proxy model sweeps. Did you decay the learning rate fully over the 4B or 16B tokens used in the proxy models mentioned in Appendix F.4 (GPT-3)? Or did you decay the learning rate over the "real" number of tokens to be used in the target model (effectively decaying very little within the proxy model sweeps)?

It'd be interesting to know what you did in the experiments in Appendix 4.3 (GPT-3), and in general whether this has any effect at all on transferability (perhaps you have some empirical or theoretical insights); recommendations would be very welcome :)
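To make the two options concrete, here is a minimal sketch (not from the paper or this repo) contrasting them with a standard cosine schedule: (a) decay fully within the proxy run's own token budget, versus (b) follow the target model's much longer schedule, so the proxy barely decays. The step counts and base learning rate below are hypothetical numbers for illustration only.

```python
import math

def cosine_lr(step, total_steps, base_lr, min_lr=0.0):
    """Standard cosine decay from base_lr down to min_lr over total_steps."""
    progress = min(step / total_steps, 1.0)
    return min_lr + 0.5 * (base_lr - min_lr) * (1.0 + math.cos(math.pi * progress))

# Hypothetical numbers: a short proxy run vs. the full target-model schedule.
proxy_steps = 4_000      # e.g. roughly a 4B-token proxy run
target_steps = 300_000   # e.g. the target model's full schedule
base_lr = 3e-4

for step in (0, 2_000, 4_000):
    lr_a = cosine_lr(step, proxy_steps, base_lr)   # option (a): full decay within the proxy run
    lr_b = cosine_lr(step, target_steps, base_lr)  # option (b): target schedule, decays very little here
    print(f"step {step:>6}: full-decay lr = {lr_a:.2e}, target-schedule lr = {lr_b:.2e}")
```

Under option (b), the proxy sweep effectively sees a near-constant learning rate, which is why the choice could plausibly matter for which learning rate looks optimal at small scale.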

yadandan commented 8 months ago

We have the same question. We also found that the optimal learning rate differs across widths depending on the number of training steps. For instance, in the early stages of training a larger learning rate performs better, but as training progresses a smaller learning rate gradually overtakes it.

xidulu commented 2 months ago

@yadandan Just so I know more, when you say "performs better", are you referring to training error or test error?

Thanks