Describe the bug
The decision about coalescing adjacent tensors is currently based only on partition_id here, which is simply the target's rank in range(dist.get_world_size(group=process_group)). This means that even though expert data parallel and data parallel both have partition_id=0, they could belong to different physical ranks in different process_groups.
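A minimal illustration of the problem (this is not DeepSpeed code; the group layouts are assumed for the example): partition_id is a rank *within* a process group, so the same partition_id can resolve to different global ranks in different groups.

```python
# Assume 4 global ranks; the data-parallel group spans all ranks, while
# the expert-data-parallel group spans only ranks 2 and 3 (assumed layout).
dp_group = [0, 1, 2, 3]   # global ranks in the data-parallel group
edp_group = [2, 3]        # global ranks in the expert-data-parallel group

def global_rank(group, partition_id):
    """Map a within-group partition_id back to a global rank."""
    return group[partition_id]

# partition_id 0 names different physical ranks in the two groups:
assert global_rank(dp_group, 0) == 0
assert global_rank(edp_group, 0) == 2
```

So comparing partition_id alone cannot tell the coalescing logic whether two tensors actually live on the same physical rank.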
To Reproduce
A unit test will be added soon.
Expected behavior
stage_1_and_2's average_tensor should detect that param_groups belong to different process_groups, e.g. by remembering prev_process_group.
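A sketch of the proposed fix (names and data layout are assumptions for illustration, not the actual average_tensor code): when walking params to coalesce, break the run whenever the process_group changes, not only when partition_id changes.

```python
def coalesce_runs(params):
    """Group adjacent params that share BOTH partition_id and process_group.

    Each param is modeled here as a (partition_id, process_group) tuple.
    """
    runs = []
    prev = None  # (partition_id, process_group) of the previous param
    for pid, pg in params:
        if prev == (pid, pg):
            runs[-1].append((pid, pg))  # same rank in the same group: coalesce
        else:
            runs.append([(pid, pg)])    # group changed: start a new run
        prev = (pid, pg)
    return runs

# Two data-parallel params followed by two expert-data-parallel params,
# all with partition_id=0:
params = [(0, "dp"), (0, "dp"), (0, "edp"), (0, "edp")]
# Keying on partition_id alone would coalesce all four; tracking the
# previous process_group yields two separate runs:
assert len(coalesce_runs(params)) == 2
```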
ds_report output
--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
runtime if needed. Op compatibility means that your system
meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io: please install the libaio-devel package with yum
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
async_io ............... [NO] ....... [NO]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
fused_adam ............. [NO] ....... [OKAY]
fused_lamb ............. [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
[WARNING] using untested triton version (2.0.0), only 1.0.0 is known to be compatible
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
transformer_inference .. [NO] ....... [OKAY]
utils .................. [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/home/azzhipa/.local/lib/python3.9/site-packages/torch']
torch version .................... 1.13.1+cu116
deepspeed install path ........... ['/home/azzhipa/workspace/DeepSpeed/deepspeed']
deepspeed info ................... 0.9.2+f7d71ec1, f7d71ec1, HEAD
torch cuda version ............... 11.6
torch hip version ................ None
nvcc version ..................... 11.6
deepspeed wheel compiled w. ...... torch 1.12, cuda 11.6
Screenshots
N/A
System info (please complete the following information):
Launcher context
Launching with the deepspeed runner.
Docker context
N/A
Additional context
N/A