mit-han-lab / llm-awq

[MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration

AWQ and SmoothQuant #130

Open DavidePaglieri opened 9 months ago

DavidePaglieri commented 9 months ago

Hi, first of all, congrats on the great work!

I wanted to ask why there isn't a more thorough comparison between AWQ and SmoothQuant in the paper. To my understanding, they both rely on a similar intuition: scaling up weights and scaling down activations to shift quantization difficulty between the two. SmoothQuant uses W8A8, which according to #56 is faster for large batch sizes since it can use INT8 matrix multiplication, while AWQ uses W4A16 (FP16 activations), which is faster for small batch sizes but slower for larger ones, because the weights have to be dequantized back to FP16 before every matrix multiplication.
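For concreteness, here is a toy sketch of the shared scaling trick as I understand it (my own simplified code, not taken from either repo; the names and the alpha-style scale are made up for illustration). A per-input-channel scale `s` rewrites `Y = X @ W` as `Y = (X / s) @ (s * W)`, which is exact in FP and only moves where the quantization error lands:

```python
import torch

def fake_quant(t, n_bits=8):
    # symmetric per-tensor round-to-nearest, purely for illustration
    qmax = 2 ** (n_bits - 1) - 1
    scale = t.abs().max() / qmax
    return (t / scale).round().clamp(-qmax, qmax) * scale

torch.manual_seed(0)
X = torch.randn(16, 64) * (torch.rand(64) * 5)   # activations with outlier channels
W = torch.randn(64, 64) * 0.02

# Simplified SmoothQuant-style scale (alpha = 0.5 on activation magnitudes only;
# the real formula also takes the weight magnitudes into account)
s = X.abs().amax(dim=0).clamp(min=1e-5).sqrt()

ref = X @ W
# The rewrite is exact before quantization
assert torch.allclose((X / s) @ (W * s[:, None]), ref, atol=1e-4)

# Compare quantization error with and without the scaling
err_plain  = (fake_quant(X) @ fake_quant(W) - ref).abs().mean()
err_scaled = (fake_quant(X / s) @ fake_quant(W * s[:, None]) - ref).abs().mean()
print(err_plain.item(), err_scaled.item())
```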

However, how do the two compare on perplexity benchmarks, and why weren't they compared in the paper, given that they come from the same authors? Am I missing something?

Thank you in advance for your help!

DavidePaglieri commented 8 months ago

Hi, would it be possible to get an answer on this? I'm still curious about it...

tanguofu commented 8 months ago

same question~