microsoft / DeepSpeed

DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
https://www.deepspeed.ai/
Apache License 2.0

[BUG] average_tensor coalescing does not take process_group into consideration for MoE #3521

Closed: clumsy closed this issue 1 year ago

clumsy commented 1 year ago

Describe the bug The decision about coalescing adjacent tensors is currently based only on partition_id here, which is simply the target's rank within range(dist.get_world_size(group=process_group)). This means that even though the expert data parallel and data parallel groups both have a partition_id=0, those partitions can belong to different physical ranks because they come from different process_groups.
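To make the failure mode concrete, here is a minimal, self-contained sketch of the flawed coalescing condition; the list of (partition_id, process_group) pairs and the bucketing loop are simplified placeholders, not DeepSpeed's actual average_tensor code:

```python
# Simplified illustration of coalescing based only on partition_id.
# Each entry stands for a parameter partition: (partition_id, process_group_name).
params = [
    (0, "data_parallel"),    # rank 0 of the data-parallel group
    (0, "expert_parallel"),  # rank 0 of the expert-parallel group (a different physical rank)
    (1, "data_parallel"),
]

buckets = []
prev_partition_id = None
for partition_id, group in params:
    if partition_id == prev_partition_id:
        # BUG: coalesced purely because partition_id matches, even though the
        # process_group changed, so two different physical ranks share one range.
        buckets[-1].append((partition_id, group))
    else:
        buckets.append([(partition_id, group)])
    prev_partition_id = partition_id

print(buckets)
# [[(0, 'data_parallel'), (0, 'expert_parallel')], [(1, 'data_parallel')]]
```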

To Reproduce A unit test will be added soon.

Expected behavior stage_1_and_2's average_tensor should detect that param_groups belong to different process_groups, e.g. by remembering prev_process_group and starting a new coalesced range whenever it changes.
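A sketch of what the suggested behavior could look like, again with placeholder data rather than the actual patch: track the previous process_group alongside partition_id and break the coalesced range whenever either changes.

```python
# Sketch of the suggested behavior: remember prev_process_group as well,
# so a change of process_group breaks the coalesced range even if partition_id repeats.
params = [
    (0, "data_parallel"),
    (0, "expert_parallel"),
    (1, "data_parallel"),
]

buckets = []
prev_partition_id, prev_group = None, None
for partition_id, group in params:
    if partition_id == prev_partition_id and group == prev_group:
        buckets[-1].append((partition_id, group))
    else:
        buckets.append([(partition_id, group)])
    prev_partition_id, prev_group = partition_id, group

print(buckets)
# [[(0, 'data_parallel')], [(0, 'expert_parallel')], [(1, 'data_parallel')]]
```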

ds_report output

--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
      runtime if needed. Op compatibility means that your system
      meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
 [WARNING]  async_io requires the dev libaio .so object and headers but these were not found.
 [WARNING]  async_io: please install the libaio-devel package with yum
 [WARNING]  If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
async_io ............... [NO] ....... [NO]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
fused_adam ............. [NO] ....... [OKAY]
fused_lamb ............. [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
 [WARNING]  using untested triton version (2.0.0), only 1.0.0 is known to be compatible
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
transformer_inference .. [NO] ....... [OKAY]
utils .................. [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/home/azzhipa/.local/lib/python3.9/site-packages/torch']
torch version .................... 1.13.1+cu116
deepspeed install path ........... ['/home/azzhipa/workspace/DeepSpeed/deepspeed']
deepspeed info ................... 0.9.2+f7d71ec1, f7d71ec1, HEAD
torch cuda version ............... 11.6
torch hip version ................ None
nvcc version ..................... 11.6
deepspeed wheel compiled w. ...... torch 1.12, cuda 11.6

Screenshots N/A

System info (please complete the following information):

Launcher context Launching with deepspeed runner

Docker context N/A

Additional context N/A

clumsy commented 1 year ago

Closing since the fix has been merged to the master branch. Please correct me if I'm wrong, @tjruwase.