microsoft / DeepSpeed

DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
https://www.deepspeed.ai/
Apache License 2.0

ZeroQuant quantization kernels and LKD #2207

Open sdpmas opened 2 years ago

sdpmas commented 2 years ago

Hi,

I was trying out the compression library for ZeroQuant quantization (on a GPT-J model). While I was able to compress the model, I didn't see any throughput/latency gain from the quantization during inference. I have a few questions regarding this. For reference, this is how I'm initializing inference:

# Assumes: `model` is a loaded HF GPT-J model, `world_size` is the
# tensor-parallel degree, and `gptj_transformer` refers to the HF
# GPT-J transformer block class.
import torch
import deepspeed
from deepspeed import module_inject
from transformers.models.gptj.modeling_gptj import GPTJBlock as gptj_transformer

injection_policy = {gptj_transformer:
                    module_inject.replace_policy.HFGPTJLayerPolicy}

model = deepspeed.init_inference(
    model,
    mp_size=world_size,
    dtype=torch.int8,
    quantization_setting=2,          # quantization groups
    replace_with_kernel_inject=True,
    injection_policy=injection_policy,
)
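
For completeness, the compression step itself roughly followed the DeepSpeedExamples model-compression flow. A minimal sketch of that step (the checkpoint name and config path are placeholders; the config is assumed to carry a "compression_training" section with the ZeroQuant W8A8 settings):

# Sketch only: compress the model before handing it to init_inference.
import torch
from transformers import AutoModelForCausalLM
from deepspeed.compression.compress import init_compression, redundancy_clean

model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B",
                                             torch_dtype=torch.float16)

# Wrap quantizable modules with simulated-quantization wrappers,
# per the config's "compression_training" section.
model = init_compression(model, "ds_config.json")

# ... optional quantization-aware fine-tuning goes here ...

# Strip the wrappers again, baking the quantization choices into the model.
model = redundancy_clean(model, "ds_config.json")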

Any help would be appreciated.

gsujankumar commented 2 years ago

Looks like the inference kernels for ZeroQuant have not been released.

sdpmas commented 2 years ago

@gsujankumar have you by any chance been able to quantize GPT-style models like GPT-2 or GPT-J?

yaozhewei commented 2 years ago

Hi,

The ZeroQuant inference engine is not released yet. The code example in DeepSpeedExamples is only meant to help verify the accuracy of ZeroQuant.

The kernel/engine release is on our calendar, and we are actively working to make it compatible with various models. Please stay tuned.

For LKD, we will also release it soon.

For the last question: the code for training and accuracy testing is different from the final inference engine. Here, everything is simulated (weights are quantized and immediately dequantized in floating point), so we can do quantization-aware training and similar experiments, but there is no speed benefit.
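
As a rough illustration of what "simulated" means here (a toy sketch, not the actual DeepSpeed code): the weight takes a quantize/dequantize round trip, so the matmul still runs at the original precision and latency is unchanged.

import torch

def fake_quantize(w, num_bits=8):
    # Symmetric per-tensor quantize -> dequantize round trip.
    qmax = 2 ** (num_bits - 1) - 1
    scale = w.abs().max() / qmax
    q = torch.clamp(torch.round(w / scale), min=-qmax - 1, max=qmax)
    return q * scale  # back to float: INT8 accuracy, FP16/FP32 speed

w = torch.randn(4096, 4096)
x = torch.randn(8, 4096)
y = x @ fake_quantize(w).t()  # compute cost identical to the unquantized matmul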

sdpmas commented 2 years ago

Thanks for replying, @yaozhewei. Could you provide an estimate of when ZeroQuant inference will be released? Any rough estimate would help!

xk503775229 commented 2 years ago

I have the same questions. Is there any guide to running inference on compressed models (especially ZeroQuant)? Any help would be appreciated.

xk503775229 commented 1 year ago

Hi, when will ZeroQuant inference be released?

david-macleod commented 1 year ago

@yaozhewei any news on this?

yaozhewei commented 1 year ago

@david-macleod The LKD example was just released (not merged yet): https://github.com/microsoft/DeepSpeedExamples/pull/214

For the kernel, please stay tuned.
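
For anyone who wants the idea before reading the PR, here is a rough sketch of LKD as described in the ZeroQuant paper (my paraphrase, not the PR code): each quantized layer is trained in isolation to match the output of its full-precision counterpart, using the teacher's own activations as inputs, so no labels or end-to-end backprop are needed. `teacher_layer`/`student_layer` are assumed to be matching HF-style transformer blocks.

import torch
import torch.nn.functional as F

def lkd_layer(teacher_layer, student_layer, layer_inputs, lr=1e-5):
    # Train one quantized (student) block to mimic its FP (teacher) block.
    opt = torch.optim.Adam(student_layer.parameters(), lr=lr)
    for hidden in layer_inputs:               # activations captured from the teacher
        with torch.no_grad():
            target = teacher_layer(hidden)[0]  # HF-style blocks return a tuple
        loss = F.mse_loss(student_layer(hidden)[0], target)
        opt.zero_grad()
        loss.backward()
        opt.step()

# Applied layer by layer, e.g.:
# for t_blk, s_blk, inputs in zip(teacher_blocks, student_blocks, cached_inputs):
#     lkd_layer(t_blk, s_blk, inputs)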

david-macleod commented 1 year ago

Thanks @yaozhewei! Do you know whether there is a rough timeline for this? e.g. 1 month, 6 months, 1 year? It would be very useful to know, as we'd like to decide whether to wait or explore other options. Thanks again!

HarleysZhang commented 1 year ago

I have the same problem. After running ZeroQuant with the DeepSpeedExamples repository's code, I didn't see any throughput/latency gain from the quantization during inference, only a decrease in model size. Have the inference kernels for ZeroQuant been released by now?

aakejiang commented 1 year ago

@yaozhewei any update on this? Is the engine of ZeroQuant inference released?

Moran232 commented 1 year ago

@yaozhewei the newest deepspeed>=0.9.0 can't run any model in INT8; many issues have been opened about this but remain unsolved. Can you tell us which version of DeepSpeed can run an INT8 model? I just want to reproduce the results from your ZeroQuant paper.

huangzl19 commented 1 month ago

@yaozhewei any update on this? Is the engine of ZeroQuant inference released?