microsoft / mup

maximal update parametrization (µP)
https://arxiv.org/abs/2203.03466
MIT License

Warmup schedule when changing the number of tokens/steps (GPT-3 experiment detail) #51

Open sashaDoubov opened 1 year ago

sashaDoubov commented 1 year ago

Hi! I had a few questions regarding the warmup schedule when changing the number of training tokens, as done in the GPT-3 experiments in your work.

  1. For the GPT-3 sweeps, is the batch size kept the same between the proxy model and target model?

  2. For the 40M proxy model, which was trained on 4B and 16B tokens (compared to 300B tokens for the full 6.7B-param model), is the warmup period set as a proportion of the total training steps (e.g. 1% of steps) or as an absolute amount (e.g. a fixed number of warmup tokens)? A rough sketch of the two conventions I mean is below.
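
To make the distinction in question 2 concrete, here's a rough sketch of the two conventions I'm asking about. Everything in it is illustrative: the tokens-per-step batch size and the 1%/375M numbers are assumptions, not values from the paper (375M warmup tokens is the figure from the GPT-3 paper itself). The point is just how differently the warmup length behaves when the token budget shrinks from 300B to 4B:

```python
# Sketch contrasting the two warmup conventions:
# (a) warmup as a fixed fraction of total steps, vs.
# (b) warmup as a fixed absolute token count.
# `batch_tokens` and the token budgets below are illustrative assumptions.

def warmup_steps_proportional(total_tokens: int, batch_tokens: int,
                              warmup_frac: float = 0.01) -> int:
    """Warmup scales with the run length: a fixed fraction of total steps."""
    total_steps = total_tokens // batch_tokens
    return int(total_steps * warmup_frac)


def warmup_steps_absolute(warmup_tokens: int, batch_tokens: int) -> int:
    """Warmup is a fixed token count, independent of the total budget
    (the GPT-3 paper warms up over the first 375M tokens)."""
    return warmup_tokens // batch_tokens


if __name__ == "__main__":
    batch_tokens = 2**20  # ~1M tokens per optimizer step (assumed)
    for total_tokens in (4_000_000_000, 16_000_000_000, 300_000_000_000):
        prop = warmup_steps_proportional(total_tokens, batch_tokens)
        absw = warmup_steps_absolute(375_000_000, batch_tokens)
        print(f"{total_tokens / 1e9:>5.0f}B tokens: "
              f"proportional warmup = {prop} steps, absolute = {absw} steps")
```

Under the proportional convention the proxy's warmup shrinks along with the run, while under the absolute convention it stays fixed and so becomes a much larger fraction of the shorter 4B/16B runs, which is why I'm unsure which one was used for the sweeps.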