bigscience-workshop / Megatron-DeepSpeed

Ongoing research training transformer language models at scale, including: BERT & GPT-2

DeepSpeed inference support for int8 parameters on BLOOM? #330

Closed pai4451 closed 2 years ago

pai4451 commented 2 years ago

Recently, HuggingFace transformers added a new int8 quantization feature that works with all HuggingFace models. It can reduce the memory footprint of large models by up to a factor of two without a significant loss in performance. Is it possible for DeepSpeed inference to support int8 quantization for BLOOM? According to the DeepSpeed inference tutorial, DeepSpeed inference supports fp32, fp16, and int8 parameters. But when I tried BLOOM with the inference script and changed dtype=torch.int8 on line 194, the following error is raised:

site-packages/deepspeed/runtime/weight_quantizer.py", line 163, in model_quantize
    return quantized_module, torch.cat(all_scales)
RuntimeError: torch.cat(): expected a non-empty list of Tensors

Is there any chance that DeepSpeed inference will support int8 quantization for BLOOM?
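
For reference, the kind of change described above amounts to roughly the following. This is only a minimal sketch: the model name, parallel degree, and surrounding setup are illustrative placeholders, not the actual values from the inference script.

```python
import torch
import deepspeed
from transformers import AutoModelForCausalLM

# Minimal sketch of switching DeepSpeed inference to int8 weights.
# The real script builds the model and tensor-parallel setup differently.
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m", torch_dtype=torch.float16)

model = deepspeed.init_inference(
    model,
    mp_size=1,                        # tensor-parallel degree; illustrative
    dtype=torch.int8,                 # changed from torch.float16; this is what triggers the error
    replace_with_kernel_inject=True,
)
```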

mayank31398 commented 2 years ago

@pai4451 You can't use it that way. Please refer to the weight-quantization config here: https://www.deepspeed.ai/docs/config-json/#weight-quantization. Let me know if it works ;)

mayank31398 commented 2 years ago

As an alternative, you can use the int8 feature in HuggingFace transformers directly. I haven't tried it either, though.
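
For reference, the HuggingFace route is the bitsandbytes-backed 8-bit loading in transformers. A minimal sketch, assuming a recent transformers together with accelerate and bitsandbytes installed (the prompt is just an illustration):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the weights in 8-bit via bitsandbytes; device_map="auto" spreads layers across available GPUs.
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom")
model = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom",
    device_map="auto",
    load_in_8bit=True,
)

inputs = tokenizer("DeepSpeed is", return_tensors="pt").to(0)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```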

mayank31398 commented 2 years ago

@pai4451 You can use the instructions here for quantization: https://github.com/bigscience-workshop/Megatron-DeepSpeed/pull/328#discussion_r954402510. However, this is a barebones script. I would encourage you to wait for this PR: https://github.com/bigscience-workshop/Megatron-DeepSpeed/pull/328. I am planning to add server + CLI inference + benchmarking support using both accelerate and DS inference. This will also support quantization should you need it.

pai4451 commented 2 years ago

> @pai4451 You can use the instructions here for quantization: #328 (comment). However, this is a barebones script. I would encourage you to wait for this PR: #328. I am planning to add server + CLI inference + benchmarking support using both accelerate and DS inference. This will also support quantization should you need it.

@mayank31398 I am running my server without internet access, so I can't use snapshot_download from the hub. Also, I am running on two nodes with 16 GPUs, so I need a total of 16 checkpoint shards instead of the 8 shards provided by microsoft/bloom-deepspeed-inference-int8. I can do the conversion myself from the old fp16 weights, but for int8 the following error occurs: NotImplementedError: Cannot copy out of meta tensor; no data. Any idea how to solve that?
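
A rough sketch of the local-checkpoint loading path being described, using a checkpoints.json in the style of the BLOOM DS-inference scripts; the directory, shard pattern, and the "type"/"version" fields are assumptions to adapt:

```python
import glob
import json
import torch
import deepspeed
from transformers import AutoConfig, AutoModelForCausalLM

# Local directory holding the (re)sharded weights, used instead of snapshot_download.
checkpoint_dir = "/data/bloom-int8-shards"   # illustrative path

# DeepSpeed inference can load sharded weights described by a checkpoints json.
checkpoints_json = "checkpoints.json"
with open(checkpoints_json, "w") as f:
    json.dump(
        {
            "type": "BLOOM",                                             # assumed field value
            "checkpoints": sorted(glob.glob(f"{checkpoint_dir}/*.pt")),  # adjust to your shard files
            "version": 1.0,
        },
        f,
    )

# Build an empty (meta) model skeleton and let DeepSpeed fill in the sharded weights.
config = AutoConfig.from_pretrained("bigscience/bloom")   # can also point to a local config path
with deepspeed.OnDevice(dtype=torch.float16, device="meta"):
    model = AutoModelForCausalLM.from_config(config)

model = deepspeed.init_inference(
    model,
    mp_size=16,                       # 2 nodes x 8 GPUs
    dtype=torch.int8,
    checkpoint=checkpoints_json,
    replace_with_kernel_inject=True,
)
```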

mayank31398 commented 2 years ago

Int8 quantization requires knowledge distillation and might need significant compute; read the ZeroQuant paper. I would suggest getting internet access on the node if you can. I don't know how you would quantize the weights yourself. Int8 might work on a single node with 8 GPUs for you. Can you give it a shot?

mayank31398 commented 2 years ago

Also, can you share the DS config you use to run on 16 GPUs? I don't know how to reshard for pipeline parallelism. Do you save the resharded weights, or reshard every time?