horseee / LLM-Pruner

[NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Support Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baichuan, TinyLlama, etc.
https://arxiv.org/abs/2305.11627
Apache License 2.0
803 stars 89 forks

Adding quantization #20

Open Duncan1115 opened 1 year ago

Duncan1115 commented 1 year ago

If I combine multiple strategies such as GPTQ + LLM-Pruner + LoRA, could the compression ratio of the LLM be greatly improved while keeping acceptable performance?

MarlNox commented 1 year ago

I assume the correct way to do it would go something like:

  1. (optional) Increase size and topic breadth of LLM-Pruner Corpus
  2. LLM-Pruner
  3. LoRA/QLoRA
  4. GPTQ

This is completely hypothetical at the moment though, and you'd need to try it out yourself to see if that'd work as intended (a rough sketch of steps 2-3 is shown below).
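For steps 2-3, a minimal sketch of attaching LoRA adapters to a pruned checkpoint with Hugging Face peft might look like the following. This is only an assumed workflow, not an official recipe: it assumes the pruned checkpoint was saved as a dict with 'model' and 'tokenizer' entries, and the path, rank, and target-module names are placeholders.

```python
import torch
from peft import LoraConfig, get_peft_model

# Assumption: the pruned model was saved roughly as
#   torch.save({'model': model, 'tokenizer': tokenizer}, "pruned_model.bin")
# The path below is a placeholder.
pruned = torch.load("prune_log/pruned_model.bin", map_location="cpu")
model, tokenizer = pruned["model"], pruned["tokenizer"]

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # LLaMA-style attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# ...fine-tune as usual, merge the adapters, then hand the model to a
# quantization step (e.g. GPTQ or bitsandbytes) for step 4.
```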
horseee commented 1 year ago

> I assume the correct way to do it would go something like:
>
>   1. (optional) Increase size and topic breadth of LLM-Pruner Corpus
>   2. LLM-Pruner
>   3. LoRA/QLoRA
>   4. GPTQ
>
> This is completely hypothetical at the moment though, and you'd need to try it out yourself to see if that'd work as intended.

Thanks for your kind response! We also assume that if quantization needs to be applied, the correct order is the one you listed. One reason is that if pruning has to be performed on a CPU, certain operations, such as SiLU, are not supported on the CPU in FP16 and below. So if you quantize first and then prune, the quantized weights may have to be converted back to FP32.
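To make the CPU constraint concrete, here is a tiny check. Whether it actually fails depends on the PyTorch build (older CPU builds lack a half-precision SiLU kernel), so treat the error message as illustrative:

```python
import torch
import torch.nn.functional as F

# On some (especially older) PyTorch CPU builds, SiLU has no FP16 kernel,
# so pruning a model kept in FP16 on CPU can raise something like:
#   RuntimeError: "silu_cpu" not implemented for 'Half'
x = torch.randn(4, dtype=torch.float16)  # CPU tensor in FP16
try:
    F.silu(x)
    print("FP16 SiLU works on this CPU build")
except RuntimeError as err:
    print("FP16 SiLU failed on CPU:", err)

# Workaround: keep the model in FP32 while pruning on CPU,
# then cast (or quantize) only afterwards.
F.silu(x.float())
```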

Duncan1115 commented 1 year ago

@horseee Hi, thanks for the good suggestion. May I ask why the paper doesn't compare the results between pure quantization and pure pruning?

horseee commented 1 year ago

> @horseee Hi, may I ask why you don't compare the results between pure quantization and pure pruning in the paper?

Hi. Quantization is orthogonal to pruning and hence can be readily applied on top of pruning to further reduce the network size. They are two different lines of model compression, targeting different types of redundancy in models. Exactly for this reason, the majority of papers on pruning CNNs or BERT do not compare the performance of the two methods.

Duncan1115 commented 1 year ago

Thanks a lot! My question came from the observation that quantization methods such as GPTQ/AWQ can achieve better performance at large compression ratios than pruning methods... Your answer helped me a lot~

77h2l commented 1 year ago

@horseee Hi, I have two questions that I hope you could answer, thanks:

  1. Can a model pruned by LLM-Pruner or other pruning tricks achieve better inference performance under FP16?
  2. How can one run a model pruned by LLM-Pruner and then use GPTQ or other methods to quantize it to INT8?
horseee commented 1 year ago

Hi. We conducted a quick experiment, and here are the inference results:

| Model | #Param | Memory | Latency | Speedup | BoolQ | PIQA | HellaSwag | WinoGrande | ARC-e | ARC-c | OBQA | Average |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| LLaMA-7B | 6.74B | 12884.5 MiB | 69.32 s | 1x | 73.18 | 78.35 | 72.99 | 67.01 | 67.45 | 41.38 | 42.40 | 63.25 |
| LLaMA-7B + LLM.int8() | 6.74B | 6777.7 MiB | 76.20 s | 0.91x | 73.36 | 78.18 | 73.01 | 66.93 | 67.47 | 40.87 | 41.80 | 63.09 |
| LLaMA-5.4B (pruned) | 5.47B | 10488.4 MiB | 58.55 s | 1.18x | 76.57 | 77.37 | 66.60 | 65.82 | 70.62 | 40.70 | 38.80 | 62.36 |
| LLaMA-5.4B + LLM.int8() | 5.47B | 5444.37 MiB | 63.10 s | 1.09x | 76.39 | 76.71 | 66.62 | 66.46 | 70.54 | 40.19 | 39.20 | 62.30 |

Latency is measured on the test set of WikiText-2. LLM.int8() slows down inference of the LLaMA-7B model in our case, which is also mentioned in the LLM.int8() paper for models around the 6.7B size.
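For reference, a rough sketch of how such a latency number could be measured on the WikiText-2 test set is below. The model path, sequence length, and batching are assumptions, not necessarily the exact protocol behind the table above:

```python
import time
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "path/to/llama-7b"  # placeholder; point this at your own checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,
    device_map="auto",
    # load_in_8bit=True,  # enable for the LLM.int8() rows
)
model.eval()

test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")
ids = tokenizer("\n\n".join(test["text"]), return_tensors="pt").input_ids

seq_len = 2048  # assumed context length
start = time.time()
with torch.no_grad():
    for i in range(0, ids.size(1) - seq_len, seq_len):
        model(ids[:, i:i + seq_len].to(model.device))
if torch.cuda.is_available():
    torch.cuda.synchronize()
print(f"Total forward time on WikiText-2 test: {time.time() - start:.2f}s")
```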

77h2l commented 1 year ago

@horseee Hi, thanks for your kind reply. Actually, I don't intend to compare the performance of pruning and quantization, as they are two different ways to compress the model. I mean: how can we smoothly combine a pruned model with quantization? Can it be done simply and directly?

horseee commented 1 year ago

> I mean: how can we smoothly combine a pruned model with quantization? Can it be done simply and directly?

In my experiment above, the pruned model is quantized following the instructions of bitsandbytes. I didn't try GPTQ, since it seems more complicated when the model is not a standard architecture and cannot be loaded directly with .from_pretrained().
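For anyone who wants to try it, a minimal sketch of the bitsandbytes route on an already-pruned, in-memory model might look like the following. This is my reading of the Linear8bitLt workflow, not the exact script used for the table above; the checkpoint path and layout are placeholders.

```python
import torch
import torch.nn as nn
import bitsandbytes as bnb

def to_int8(module: nn.Module, threshold: float = 6.0) -> nn.Module:
    """Recursively swap nn.Linear layers for bitsandbytes Linear8bitLt.

    Quantization itself happens later, when the model is moved to GPU.
    In practice you may want to skip lm_head.
    """
    for name, child in module.named_children():
        if isinstance(child, nn.Linear):
            int8_linear = bnb.nn.Linear8bitLt(
                child.in_features,
                child.out_features,
                bias=child.bias is not None,
                has_fp16_weights=False,
                threshold=threshold,
            )
            int8_linear.load_state_dict(child.state_dict())
            setattr(module, name, int8_linear)
        else:
            to_int8(child, threshold)
    return module

# Assumption: the pruned checkpoint is a dict with a 'model' entry, saved
# by the pruning script (path is a placeholder).
pruned = torch.load("prune_log/pruned_model.bin", map_location="cpu")
model = to_int8(pruned["model"]).to("cuda")  # moving to GPU triggers int8 quantization
```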