huggingface / optimum-quanto

A pytorch quantization backend for optimum

How to support activation 4bit quantization? #346

Open Ther-nullptr opened 2 weeks ago

Ther-nullptr commented 2 weeks ago

As mentioned in the title: how can 4-bit activation quantization be supported in quanto?

dacorvo commented 2 weeks ago

Why are 4-bit activations not supported in quanto?

Activations are quantized dynamically, using scales recorded during calibration (unlike weights, which are quantized statically), which adds an extra cost at inference. To make that cost worth it, we need to benefit from an accelerated matmul with the quantized weights. Unfortunately, at this stage the only accelerated operations available use scalar quantization scales, which give terrible results with 4-bit weights (you need group-wise scales to preserve accuracy).
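For context, this is the currently supported combination the comment refers to: statically quantized 4-bit weights together with dynamically quantized 8-bit activations, whose scales are recorded during a calibration pass. A minimal sketch of that flow is below, assuming the `quantize` / `Calibration` / `freeze` API and the `qint4` / `qint8` dtypes exposed by optimum-quanto (check the project README if these names have changed):

```python
import torch
from torch import nn

# Assumed API names from optimum-quanto; verify against the current release.
from optimum.quanto import Calibration, freeze, qint4, qint8, quantize

model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 8))

# 4-bit weights (static), 8-bit activations (dynamic, scales recorded below).
quantize(model, weights=qint4, activations=qint8)

# Run a few representative batches so activation scales get recorded.
with Calibration(), torch.no_grad():
    for _ in range(8):
        model(torch.randn(16, 64))

# Replace the float weights with their quantized counterparts.
freeze(model)
```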

How could you still use 4-bit activations?

You would need to modify some code here and there:
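The specific changes are not listed in this excerpt. Purely for intuition, here is a standalone sketch, not quanto internals and using hypothetical helper names, of what dynamically quantizing an activation tensor to signed 4-bit with a single per-tensor scale involves, and why only 16 representable levels makes a scalar scale so lossy:

```python
import torch

def quantize_activation_int4(x: torch.Tensor):
    """Hypothetical per-tensor symmetric 4-bit quantization of an activation.

    Illustrative only: shows the scale / round / clamp steps a dynamic
    4-bit activation path would need.
    """
    qmin, qmax = -8, 7                       # signed 4-bit integer range
    scale = x.abs().max() / qmax             # scalar (per-tensor) scale
    q = torch.clamp(torch.round(x / scale), qmin, qmax).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.float() * scale

x = torch.randn(4, 16)
q, s = quantize_activation_int4(x)
print((dequantize(q, s) - x).abs().max())    # per-tensor 4-bit error
```

Without an accelerated int4 matmul kernel that consumes these quantized activations directly, this extra work only adds overhead, which is the trade-off described above.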