microsoft / DeepSpeed

DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
https://www.deepspeed.ai/
Apache License 2.0

[BUG] Unused params lead to "still have inflight params" error #4094

Open tiwargau opened 1 year ago

tiwargau commented 1 year ago

Bug description Context: I am running inference on a multi-modal LLM where, at each decoding step, which parts of the network are used depends on the input modality. In my second step, DeepSpeed goes ahead and fetches a part of the network that ends up not being used. The code does assume that this can happen and correctly invalidates the trace. However, the params that were prefetched but never used are detected as in-flight at the end of the step, resulting in the RuntimeError(f"still have inflight params").

To Reproduce My setup is a bit involved, and I think the issue is clear from the description above. However, if the team feels they would benefit from a simple reproduction, I can work on creating one. Please let me know.
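
In the meantime, a minimal sketch of the pattern that triggers it (a toy conditional model, not my actual setup, and untested as written):

```python
# Hypothetical toy reproduction: one branch is skipped per step, so ZeRO-3 can
# prefetch parameters that are never gathered/used in that step.
import torch
import torch.nn as nn
import deepspeed

class ConditionalNet(nn.Module):
    def __init__(self, dim: int = 16):
        super().__init__()
        self.shared = nn.Linear(dim, dim)
        self.branch_a = nn.Linear(dim, dim)  # used for one "modality"
        self.branch_b = nn.Linear(dim, dim)  # used for the other

    def forward(self, x, use_a: bool):
        h = self.shared(x)
        return self.branch_a(h) if use_a else self.branch_b(h)

# ZeRO-3 inference-style setup: no optimizer, just parameter partitioning.
ds_config = {"train_micro_batch_size_per_gpu": 1, "zero_optimization": {"stage": 3}}
engine, *_ = deepspeed.initialize(model=ConditionalNet(), config=ds_config)
engine.module.eval()

x = torch.randn(1, 16, device=engine.device)
with torch.no_grad():
    engine(x, use_a=True)   # step 1: branch_a is used and recorded in the trace
    engine(x, use_a=False)  # step 2: branch_a may be prefetched but never used -> inflight params
```

(Run it under the deepspeed launcher, e.g. `deepspeed repro.py`, so the distributed setup is initialized.)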

Expected behavior I would have expected that when we notice the order of params isn't the same as before, we would also stop demanding that all of them be used. Right now, we tolerate a different ordering but still require that every param previously used (and hence prefetched) is used at some point.

ds_report output

Setting ds_accelerator to cuda (auto detect)
--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
      runtime if needed. Op compatibility means that your system
      meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
 [WARNING]  async_io requires the dev libaio .so object and headers but these were not found.
 [WARNING]  async_io: please install the libaio-devel package with yum
 [WARNING]  If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
async_io ............... [NO] ....... [NO]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
fused_adam ............. [NO] ....... [OKAY]
fused_lamb ............. [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
 [WARNING]  please install triton==1.0.0 if you want to use sparse attention
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
transformer_inference .. [NO] ....... [OKAY]
--------------------------------------------------
No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda-11.6'
DeepSpeed general environment info:
torch install path ............... ['/home/ec2-user/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/torch']
torch version .................... 1.13.0
deepspeed install path ........... ['/home/ec2-user/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/deepspeed']
deepspeed info ................... 0.10.0, unknown, unknown
torch cuda version ............... 11.7
torch hip version ................ None
nvcc version ..................... 11.6
deepspeed wheel compiled w. ...... torch 1.13, cuda 11.7

System info (please complete the following information):

alexwangmac commented 1 year ago

Have you solved the problem? My situation is exactly the same as yours.

tiwargau commented 1 year ago

Hi @alexwangmac, I haven't really solved this problem, just worked around it by setting "stage3_prefetch_bucket_size": 0. This is not an ideal solution, as you lose the prefetching efficiency.

Hoping the DeepSpeed team can help with this soon.
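
For reference, this is roughly how it looks in the ZeRO-3 section of the config (the other values here are illustrative, not my exact settings):

```python
# Illustrative ZeRO-3 config; the only part relevant to the workaround is
# setting "stage3_prefetch_bucket_size" to 0 so no parameters are prefetched.
ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "zero_optimization": {
        "stage": 3,
        "stage3_prefetch_bucket_size": 0,  # disables prefetching, so nothing is left inflight
    },
}
# engine, *_ = deepspeed.initialize(model=model, config=ds_config)  # model: your nn.Module
```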

hatrexltd commented 1 year ago

Same

haixpham commented 9 months ago

Hi @alexwangmac, I haven't really solved this problem, just worked around it by setting "stage3_prefetch_bucket_size": 0. This is not an ideal solution, as you lose the prefetching efficiency.

Hoping the DeepSpeed team can help with this soon.

I ran into the same problem and your fix worked! Indeed the problem arises if not all model params are used during inference.

siddk commented 9 months ago

Any update on this? Running into the same issue when I have unused parameters for a given forward pass!

haixpham commented 9 months ago

Any update on this? Running into the same issue when I have unused parameters for a given forward pass!

In the config JSON, set "stage3_prefetch_bucket_size": 0; that should work.

andre-bauer commented 8 months ago

In the config JSON, set "stage3_prefetch_bucket_size": 0; that should work.

While this might "work", it still does not solve the problem, for example with Mixtral, since this kind of MoE does not work properly with DeepSpeed. I also tried to run Mixtral on a multi-GPU setup and, instead of getting this error message, the process just hangs indefinitely, most likely because parameters are fetched but not used and thus never released, even with prefetch_bucket_size=0.

BBerabi commented 8 months ago

In the config JSON, set "stage3_prefetch_bucket_size": 0; that should work.

While this might "work", it still does not solve the problem, for example with Mixtral, since this kind of MoE does not work properly with DeepSpeed. I also tried to run Mixtral on a multi-GPU setup and, instead of getting this error message, the process just hangs indefinitely, most likely because parameters are fetched but not used and thus never released, even with prefetch_bucket_size=0.

I have exactly the same issue. When will Mixtral support be added to DeepSpeed?

tohtana commented 8 months ago

(I posted a similar comment on #4808.) I will investigate this issue, but in the meantime you can use DeepSpeed-FastGen (DeepSpeed-MII) for text generation. The example is available here; I verified that Mixtral works just by modifying the model name. It is easier to use "non-persistent" mode for testing purposes, but "persistent" mode will give you the best performance. Please refer to DeepSpeed-MII for more details.
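
A minimal non-persistent sketch (the Hugging Face model id and generation settings below are just examples):

```python
# Non-persistent DeepSpeed-MII pipeline; the model id and max_new_tokens are example values.
import mii

pipe = mii.pipeline("mistralai/Mixtral-8x7B-v0.1")
responses = pipe(["DeepSpeed is"], max_new_tokens=64)
print(responses)
```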

tohtana commented 8 months ago

Hi everyone,

#4966 should have fixed this issue. You can find a working example there.

The PR has already been merged into master. Please feel free to try it, but I still recommend using DeepSpeed-FastGen for text generation.

matthewdm0816 commented 6 months ago

Hi, I also ran into this problem in my experiments. It seems that during generation some parameters are not used. Apart from the PR, a simple workaround is to pass a dummy input so that the otherwise unused parameters are invoked during inference. Warnings like "Invalidate trace cache @ step 1: expected module 1704, but got module 1703" still appear, but training and generation seem to be fine.
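
A rough sketch of that workaround on a toy module (hypothetical names, not my actual model):

```python
# Dummy-forward workaround (sketch): also run the inactive branch with zero weight,
# so its prefetched ZeRO-3 parameters are actually gathered and released instead of
# being left inflight when only one branch is needed for the current input.
import torch.nn as nn

class ConditionalBlock(nn.Module):
    def __init__(self, dim: int = 16):
        super().__init__()
        self.branch_a = nn.Linear(dim, dim)
        self.branch_b = nn.Linear(dim, dim)

    def forward(self, h, use_a: bool):
        active = self.branch_a if use_a else self.branch_b
        inactive = self.branch_b if use_a else self.branch_a
        out = active(h)
        # Dummy pass through the unused branch; the 0.0 factor leaves the output unchanged.
        out = out + 0.0 * inactive(h).sum()
        return out
```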