epfLLM / Megatron-LLM

distributed trainer for LLMs

Replace 1F1B with ZB-H1 #93

Open QPHutu opened 8 months ago

QPHutu commented 8 months ago

The change is a quick implementation to replace 1F1B with ZB-H1 proposed in Zero Bubble Pipeline Parallelism, which reduces the bubbles in pipeline parallelism.

QPHutu commented 8 months ago

The paper has been accepted by ICLR 2024.

The key idea is to split the backward computation into two parts: one that computes the gradient with respect to the input and another that computes the gradient with respect to the parameters. By rescheduling the parameter-gradient computation, we can get better efficiency without sacrificing anything.
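For intuition only (this is a hedged sketch, not code from this PR; the actual change lives inside Megatron's pipeline schedule), here is a minimal PyTorch example of the split that ZB-H1 relies on: compute the input gradient first so it can be sent to the previous stage, and defer the parameter gradient to a later point in the schedule.

```python
import torch

# Toy "pipeline stage": a single linear layer.
layer = torch.nn.Linear(16, 16)
params = list(layer.parameters())

x = torch.randn(4, 16, requires_grad=True)
loss = layer(x).sum()

# Part 1: gradient w.r.t. the input only -- this is what the previous
# pipeline stage is waiting for, so it is computed (and sent) first.
# retain_graph=True keeps the graph alive for the second pass below.
(grad_input,) = torch.autograd.grad(loss, x, retain_graph=True)

# ... the stage could now do other forward/backward work; the parameter
# gradients are deferred to fill what would otherwise be pipeline bubbles ...

# Part 2: gradient w.r.t. the parameters, computed at a later point.
param_grads = torch.autograd.grad(loss, params)
for p, g in zip(params, param_grads):
    p.grad = g  # hand the gradients to the optimizer as usual
```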

Dylancer1998 commented 8 months ago

May I ask what led you to contribute this to this repository rather than the original one? Just curious about your thoughts! @QPHutu

QPHutu commented 8 months ago

Thanks for the reply. There are 2 main reasons.

  1. We have an internal team using this repo to train LLMs, so to better support their training we decided to merge this change upstream.
  2. We also have plans to merge our new scheduling methods into the original Megatron, not only ZB-H1 but the other schedulers as well. However, the code changes involved are quite complicated, so both we and Nvidia want to be careful about them. To keep things simpler, we want to push ZB-H1 to the community first.
martinjaggi commented 8 months ago

thanks for the PR!

for merging we'd like to understand the impact a bit better. did you verify how model-parallel training of the models currently supported here (such as llama2) is affected by your change, in terms of speed, stability, and whether model behavior is unchanged?

indeed, it would also be nice to hear feedback from the Nvidia/Megatron-LM team if you get a chance