Describe the bug
Training a Hugging Face model (Llama 3.1 with PEFT) on long context with sequence_parallel_size > 1 works only up to ZeRO stage 2.
If I set "stage" to 3, I get the following error:
So maybe there is an issue with the world_size definition when running ZeRO-3 (though even after fixing it to the correct world size and device_mesh, the same error occurs)?
I also had to disable this assertion when switching over from ZeRO-1 to 3:
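For illustration, here is a small sketch (not DeepSpeed's actual code; the function name is a hypothetical helper) of how the process grid is typically factored when combining ZeRO data parallelism with Ulysses sequence parallelism. The point is that with SP > 1 the effective data-parallel world size is world_size // SP rather than world_size, which is the kind of mismatch the world_size question above hints at:

```python
# Hypothetical helper, for illustration only: factor the global world size
# into (data_parallel, sequence_parallel) dimensions. ZeRO shards its
# optimizer/parameter state across the data-parallel dimension, so with
# sequence parallelism that dimension is world_size // SP, not world_size.
def mesh_shape(world_size: int, sequence_parallel_size: int) -> tuple[int, int]:
    if world_size % sequence_parallel_size != 0:
        raise ValueError("world_size must be divisible by sequence_parallel_size")
    return world_size // sequence_parallel_size, sequence_parallel_size

# e.g. 8 GPUs with sequence_parallel_size=4 -> data-parallel group of 2
print(mesh_shape(8, 4))  # (2, 4)
```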
To Reproduce
Running the example from DeepSpeedExamples/post_training/sequence_parallelism/test_ulysses.py on the HF PR https://github.com/huggingface/transformers/pull/32305, with:
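Since the exact config is not shown above, here is a hypothetical DeepSpeed config sketch (written as a Python dict) of the kind the report describes, i.e. ZeRO stage 3 combined with sequence parallelism. All values, and the placement of the sequence_parallel_size key, are assumptions taken from the issue text, not the reporter's actual file:

```python
# Hypothetical ds_config sketch; values are illustrative assumptions.
ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "bf16": {"enabled": True},
    "zero_optimization": {"stage": 3},  # stage 2 reportedly works; stage 3 errors
    "sequence_parallel_size": 4,        # key name as used in the issue
}
```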
Expected behavior
ZeRO-3 should work as stated in the official blog post.
ds_report output
DeepSpeed general environment info:
torch install path ............... ['/root/miniconda3/envs/finetuning/lib/python3.10/site-packages/torch']
torch version .................... 2.4.1+cu121
deepspeed install path ........... ['/root/miniconda3/envs/finetuning/lib/python3.10/site-packages/deepspeed']
deepspeed info ................... 0.15.1, unknown, unknown
torch cuda version ............... 12.1
torch hip version ................ None
nvcc version ..................... 12.1
deepspeed wheel compiled w. ...... torch 2.4, cuda 12.1
shared memory (/dev/shm) size .... 321.31 GB
System info:
Launcher context
I am using the deepspeed launcher.
Thanks for the help!
Even if this is not officially supported, I would be grateful for some pointers so I can implement something on my own.
For context:
We want to train a 70B model at a sequence length of 60k. 8B already works with Ulysses, but without ZeRO-3 I think 70B is impossible on a single node.
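A back-of-envelope check of that claim, under stated assumptions: bf16 base weights (2 bytes per parameter), PEFT so gradient and optimizer state for the frozen base model is negligible, and a single node of 8 GPUs with 80 GB each (e.g. A100s). The helper name is illustrative, not a DeepSpeed API:

```python
# Back-of-envelope memory estimate. Assumptions: bf16 base weights
# (2 bytes/param), PEFT (trainable adapter state negligible), 8x80GB GPUs.
GB = 1e9

def base_weights_per_gpu_gb(n_params: float, n_gpus: int, zero3: bool) -> float:
    total = n_params * 2 / GB                   # bf16 parameter bytes in GB
    return total / n_gpus if zero3 else total   # ZeRO-3 shards params; 1/2 replicate

for size in (8e9, 70e9):
    rep = base_weights_per_gpu_gb(size, 8, zero3=False)
    sh = base_weights_per_gpu_gb(size, 8, zero3=True)
    print(f"{size / 1e9:.0f}B: replicated {rep:.1f} GB/GPU, ZeRO-3 {sh:.1f} GB/GPU")
```

With weights replicated (ZeRO-1/2), 70B needs 140 GB per GPU for parameters alone, which exceeds an 80 GB card; sharded under ZeRO-3 across 8 GPUs it drops to 17.5 GB per GPU, while 8B fits either way.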
@Xirid, ZeRO stage 3 is currently not supported in DeepSpeed long context parallelism (Ulysses). ZeRO-3 support is on our roadmap; contributions are welcome!