pytorch / ao

PyTorch native quantization and sparsity for training and inference
BSD 3-Clause "New" or "Revised" License

2:4 sparsity + PTQ (int8) model inference #134

Open RanchiZhao opened 7 months ago

RanchiZhao commented 7 months ago

Are there any runnable demos of using 2:4 sparse QAT/PTQ to accelerate inference, e.g. applying PTQ to a 2:4-sparse LLaMA? I am curious about the potential speedup ratio this could achieve. The overall pipeline might be: compress the weight matrix with 2:4 sparsity and quantize it to INT8 via PTQ/QAT, and quantize the activation matrix to INT8 as well, so that the main computation becomes INT8×INT8 matmuls. Is there a tutorial document available? I am a beginner in the field of quantization. Thanks!
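For a concrete picture of the 2:4 half of that pipeline, here is a minimal sketch using only core PyTorch (`torch.sparse.to_sparse_semi_structured`, available since PyTorch 2.1). The shapes and the simple magnitude-pruning rule are illustrative, not torchao's actual pruning flow:

```python
import torch
from torch.sparse import to_sparse_semi_structured

# Toy fp16 weight; 2:4 kernels need CUDA (Ampere or newer) and
# dimensions that satisfy the kernel's alignment constraints.
W = torch.randn(4096, 4096, dtype=torch.float16, device="cuda")

# Magnitude pruning to a 2:4 pattern: in every contiguous group of four
# weights, zero out the two with the smallest absolute value.
groups = W.view(-1, 4)
groups.scatter_(1, groups.abs().argsort(dim=1)[:, :2], 0.0)

# Compress to the packed 2:4 layout; matmuls against it dispatch to
# the sparse kernels.
W_sparse = to_sparse_semi_structured(W)

x = torch.randn(32, 4096, dtype=torch.float16, device="cuda")
y = torch.nn.functional.linear(x, W_sparse)
```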

jcaip commented 7 months ago

Hi @RanchiZhao, yes, please see #36, which has a benchmark script and the subclasses. It would be a good idea to add a beginner tutorial as well.
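Until a tutorial exists, a rough way to eyeball the kernel-level speedup (a hand-rolled comparison, not the benchmark script from #36) is to time a dense vs. 2:4-sparse linear directly:

```python
import torch
import torch.utils.benchmark as benchmark
from torch.sparse import to_sparse_semi_structured

def time_us(fn, **kwargs):
    # torch.utils.benchmark handles CUDA synchronization and warmup.
    timer = benchmark.Timer(stmt="fn(**kwargs)",
                            globals={"fn": fn, "kwargs": kwargs})
    return timer.timeit(100).median * 1e6

W = torch.randn(4096, 4096, dtype=torch.float16, device="cuda")
groups = W.view(-1, 4)                                       # prune to 2:4 first,
groups.scatter_(1, groups.abs().argsort(dim=1)[:, :2], 0.0)  # as in the sketch above
x = torch.randn(32, 4096, dtype=torch.float16, device="cuda")

dense_us = time_us(torch.nn.functional.linear, input=x, weight=W)
sparse_us = time_us(torch.nn.functional.linear, input=x,
                    weight=to_sparse_semi_structured(W))
print(f"dense: {dense_us:.1f}us  sparse: {sparse_us:.1f}us  "
      f"speedup: {dense_us / sparse_us:.2f}x")
```

Whether a kernel-level win like this translates into an end-to-end model speedup depends on the matmul shapes, batch size, and hardware.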

RanchiZhao commented 7 months ago

Hi @jcaip, thanks. I saw this before but didn't get a chance to look it over carefully; I'll do that now.

RanchiZhao commented 7 months ago

Oh, one more thing: does this method work for LLMs like LLaMA? I want to do this with Hugging Face's transformers, which may be tough.

jcaip commented 7 months ago

It should work for LLMs, but the speedup characteristics depend on the matmul shapes, so you will see more or less benefit depending on what you are trying to do. You will also need the model to be torch.compile traceable, as we use the torchao quantization workflow.
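For reference, that workflow looks roughly like the sketch below. The config name `int8_dynamic_activation_int8_semi_sparse_weight` follows the torchao quant_api of this era and may be spelled differently in newer releases, so treat the names as assumptions and check the current README:

```python
import torch
from torchao.quantization.quant_api import (
    quantize_,
    int8_dynamic_activation_int8_semi_sparse_weight,  # name may differ by torchao version
)

# Any torch.compile-traceable model works; a single Linear stands in
# for an LLM block here.
model = torch.nn.Sequential(torch.nn.Linear(4096, 4096)).half().cuda().eval()

with torch.no_grad():
    # Prune to 2:4 so the semi-sparse kernels apply (magnitude-pruning sketch).
    w = model[0].weight.view(-1, 4)
    w.scatter_(1, w.abs().argsort(dim=1)[:, :2], 0.0)

# Swap the Linear weights for int8-quantized, 2:4-sparse tensor subclasses.
quantize_(model, int8_dynamic_activation_int8_semi_sparse_weight())

# Compile so the quant/sparse ops lower to efficient fused kernels.
model = torch.compile(model, mode="max-autotune")
out = model(torch.randn(32, 4096, dtype=torch.float16, device="cuda"))
```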

RanchiZhao commented 7 months ago

Thanks a lot. Another interesting thing: I want to do PEFT (like LoRA) on the sparse & quantized model. Once we have the trained LoRA modules, we can merge them back into the original model (the bf16 one) and apply sparsification & quantization to it again. That gives us a "sparsity- and quantization-aware trained" model we can use for inference, as sketched below.

Is this possible? We should make sure:
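To make the described loop concrete, here is a hypothetical sketch of the merge-then-requantize step for a single Linear layer. `lora_A`, `lora_B`, and `scale` stand for the trained adapter matrices and scaling factor; none of this is an existing torchao API:

```python
import torch

def merge_and_requantize(base_weight_bf16: torch.Tensor,
                         lora_A: torch.Tensor,
                         lora_B: torch.Tensor,
                         scale: float) -> torch.Tensor:
    # 1. Fold the trained LoRA update back into the original bf16 weight:
    #    W' = W + scale * (B @ A), with A: (r, in), B: (out, r).
    merged = base_weight_bf16 + scale * (lora_B @ lora_A)

    # 2. Re-prune to 2:4. The merged weight generally no longer matches the
    #    old mask, so this step perturbs the network and will cost accuracy
    #    unless the 2:4 mask is enforced during LoRA training.
    groups = merged.view(-1, 4)
    groups.scatter_(1, groups.abs().argsort(dim=1)[:, :2], 0.0)

    # 3. The caller would then re-quantize, e.g. by rebuilding the model
    #    with the merged weights and calling torchao's quantize_() again.
    return merged
```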

jcaip commented 7 months ago

Do you have some reference papers for PEFT + sparsity? I am interested in that space as well, but have not been following it actively.

It's impossible to say for sure without knowing the exact approach, but theoretically I believe some version of this should be possible, although the accuracy loss is likely to be the main obstacle. In terms of implementation, though, this is not something directly supported by our APIs. You may be able to hack something together, but we do not plan to add this functionality at the moment. We may consider it down the line, so for anyone reading who's interested, please react / +1 this comment.

RanchiZhao commented 7 months ago

No, AFAIK there are none. I'll keep looking.