microsoft / Megatron-DeepSpeed

Ongoing research training transformer language models at scale, including: BERT & GPT-2

Sequence Parallel is incompatible with Rotary Positional Embedding #385

Open anogkongda opened 6 months ago

anogkongda commented 6 months ago

I would like to fine-tune LLaMA 2 on long-sequence data (32K tokens or more).

I followed the example below for sequence parallelism:

https://github.com/microsoft/Megatron-DeepSpeed/blob/main/examples_deepspeed/deepspeed4science/megatron_long_seq_support/pretrain_gpt_30B_seq_parallel.sh

Sadly, the LM loss becomes NaN when I use rotary positional embeddings. When I disable rotary positional embeddings, the loss is fine even though all other parameters/arguments stay the same.
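For reference, a small standalone PyTorch sketch of how coarse bf16 gets for rotary angles at 32K positions (the `dim`, `base`, and sequence length here are my own illustrative assumptions, not values read from that script):

```python
import torch

# Illustrative RoPE parameters; `dim`, `base`, and `seq_len` are assumptions,
# not values taken from the 30B sequence-parallel example script.
dim, base, seq_len = 128, 10000.0, 32768

inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))
positions = torch.arange(seq_len, dtype=torch.float32)
angles = torch.outer(positions, inv_freq)   # [seq_len, dim/2]; values reach ~32767 rad

cos_fp32 = torch.cos(angles)
# bf16 keeps only ~8 mantissa bits, so rounding a ~32K-radian angle to bf16 can
# shift it by tens of radians before cos/sin are taken.
cos_bf16 = torch.cos(angles.to(torch.bfloat16).float())

print("max |cos_fp32 - cos_bf16| =", (cos_fp32 - cos_bf16).abs().max().item())
```

This is only an illustration of bf16 precision loss on large rotary angles, not a claim that this is where the NaN originates.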

anogkongda commented 6 months ago

After testing, I found the following:

  1. Reducing the model size (e.g., the original 32-layer LLaMA 7B reduced to 16 layers) prevents the loss from becoming NaN.

  2. Switching from BF16 to FP16 also prevents the loss from becoming NaN.

  3. When the loss becomes NaN, there is no protection mechanism, so all model parameters end up as NaN (see the guard sketch after this list).

  4. When Sequence Parallel is enabled, the BF16 Optimizer might overflow under certain circumstances, potentially due to computational errors.

  5. I am still observing how the loss evolves in FP16 training.
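For point 3, this is the kind of guard I mean. It is only a generic PyTorch sketch (the `backward_and_step` helper is hypothetical, and in practice DeepSpeed's engine owns the backward/step), not something Megatron-DeepSpeed currently does:

```python
import math
import torch

def backward_and_step(model, optimizer, loss):
    """Skip the update when the loss or any gradient is non-finite,
    so NaNs cannot propagate into the parameters.
    Hypothetical helper, not part of Megatron-DeepSpeed's training loop."""
    if not math.isfinite(loss.item()):
        optimizer.zero_grad(set_to_none=True)
        return False  # step skipped, parameters untouched
    loss.backward()
    if any(p.grad is not None and not torch.isfinite(p.grad).all()
           for p in model.parameters()):
        optimizer.zero_grad(set_to_none=True)
        return False  # step skipped, parameters untouched
    optimizer.step()
    optimizer.zero_grad(set_to_none=True)
    return True
```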

inkcherry commented 5 months ago

Hi @anogkongda, I also encountered the NaN issue and resolved it with https://github.com/microsoft/Megatron-DeepSpeed/pull/399. Could you try it and see whether it solves your problem?

anogkongda commented 5 months ago

Thank you, I will try it and report my results ASAP.

anogkongda commented 5 months ago

It doesn't work in my case. I'm still digging into it to get the loss right.