Ther-nullptr opened 2 weeks ago

As mentioned in title.
Activations are quantized dynamically, using scales recorded during calibration (unlike weights, which are quantized statically), which adds an extra cost to inference. For this to be worth it, we need to benefit from an accelerated matmul with the quantized weights. Unfortunately, at this stage the only accelerated operations available use scalar quantization scales, which give terrible results with 4-bit weights (you need group-wise scales to preserve accuracy).
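To illustrate the accuracy point, here is a minimal PyTorch sketch (not the library's actual implementation; the shapes and the group size of 128 are arbitrary choices) comparing the reconstruction error of symmetric 4-bit quantization with a single per-tensor scale against group-wise scales:

```python
import torch

def quantize_int4(w: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Symmetric 4-bit quantize/dequantize with the given scale(s)."""
    q = torch.clamp(torch.round(w / scale), -8, 7)  # int4 range is [-8, 7]
    return q * scale  # dequantize back to float to measure the error

w = torch.randn(4096, 4096)

# Per-tensor (scalar) scale: one scale for the whole weight matrix.
scalar_scale = (w.abs().max() / 7).clamp(min=1e-8)
err_scalar = (w - quantize_int4(w, scalar_scale)).pow(2).mean()

# Group-wise scales: one scale per group of 128 consecutive weights.
group_size = 128
wg = w.reshape(-1, group_size)
group_scales = (wg.abs().amax(dim=1, keepdim=True) / 7).clamp(min=1e-8)
err_group = (wg - quantize_int4(wg, group_scales)).pow(2).mean()

print(f"per-tensor MSE: {err_scalar.item():.6f}, "
      f"group-wise MSE: {err_group.item():.6f}")
```

The group-wise error comes out much lower because each scale adapts to the local dynamic range of its group, whereas a single scalar scale is dominated by the largest outlier in the whole tensor; with only 16 quantization levels at 4 bits, that difference is what makes or breaks accuracy.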
You would need to modify some code here and there: