Open agunapal opened 1 year ago
Hi @agunapal,
Can you please try this PR and see if the issue is resolved? Thanks, Reza
@RezaYazdaniAminabadi Thanks, I tried it. It doesn't solve the issue.
I am using checkpoint loading.
It asserts here https://github.com/microsoft/DeepSpeed/blob/master/deepspeed/module_inject/replace_module.py#L547
since I am setting "replace_with_kernel_inject": true
and there are no optimized kernels defined for Falcon.
It seems checkpoint loading doesn't work even for OPT models if I set this to false.
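For context, the flag in question is part of the config dict passed to `deepspeed.init_inference`. A minimal sketch of such a config follows; the `tp_size` and checkpoint path are placeholders, not values from this thread:

```python
# Hypothetical DeepSpeed inference config sketch; the keys mirror
# deepspeed.init_inference's documented options, the values are placeholders.
ds_config = {
    "dtype": "fp16",
    "tensor_parallel": {"tp_size": 4},    # number of GPUs to shard across
    "replace_with_kernel_inject": False,  # no optimized kernels for Falcon,
                                          # so kernel injection must stay off
    "checkpoint": "checkpoints.json",     # checkpoint descriptor for meta-tensor loading
}
```

With `replace_with_kernel_inject` set to `False`, DeepSpeed skips the kernel-injection path that triggers the assert above, which is why the checkpoint-loading failure described next becomes the blocking issue.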
I get this error when I call `init_inference`:

```
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
2023-06-28T18:05:37,505 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - NotImplementedError: Cannot copy out of meta tensor; no data!
```
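That `NotImplementedError` comes from PyTorch itself rather than DeepSpeed: tensors on the `meta` device carry only shape and dtype, with no backing storage, so they cannot be copied to a real device. A minimal reproduction outside DeepSpeed:

```python
import torch

# A meta tensor records shape/dtype but allocates no data.
t = torch.empty(4, 4, device="meta")

try:
    t.to("cpu")  # copying out of a meta tensor is not implemented
except NotImplementedError as e:
    # e.g. "Cannot copy out of meta tensor; no data!" as in the log above
    print(e)
```

This is why the checkpoint-loading path matters: the model is first built on the meta device, and the checkpoint load is what is supposed to materialize the weights before any `.to(device)` call.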
**Describe the bug**
Getting the following error when I try to load the falcon-40b model. The same config works for opt-30b.
Config
Error happens when I call this function
```
[2023-06-24 00:55:04,352] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect)
--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
      runtime if needed. Op compatibility means that your system
      meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
 [WARNING]  async_io requires the dev libaio .so object and headers but these were not found.
 [WARNING]  async_io: please install the libaio-dev package with apt
 [WARNING]  If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
async_io ............... [NO] ....... [NO]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
fused_adam ............. [NO] ....... [OKAY]
fused_lamb ............. [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
 [WARNING]  sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.0
 [WARNING]  using untested triton version (2.0.0), only 1.0.0 is known to be compatible
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
transformer_inference .. [NO] ....... [OKAY]
utils .................. [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/home/ubuntu/anaconda3/envs/deepspeed/lib/python3.10/site-packages/torch']
torch version .................... 2.0.1+cu117
deepspeed install path ........... ['/home/ubuntu/anaconda3/envs/deepspeed/lib/python3.10/site-packages/deepspeed']
deepspeed info ................... 0.9.4, unknown, unknown
torch cuda version ............... 11.7
torch hip version ................ None
nvcc version ..................... 11.7
deepspeed wheel compiled w. ...... torch 2.0, cuda 11.7
```