Low-bit quants often benefit from a fudge factor applied to the (super-)block scale. When I was developing `IQ2_K` and `IQ3_K`, it was faster to change the fudge factor in `ggml-cuda/convert.cu` and recompile than to change it in the quantization function and re-quantize. But when I was ready, I forgot to move the `IQ2_K` and `IQ3_K` fudge factors into quantization, so they remained in the CUDA dequantization function (and hence weren't applied anywhere else). This PR fixes that.
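For context, here is a minimal sketch of the difference; all names and the `1.05f` value are made up for illustration and are not the actual factors or functions from this PR:

```c++
// Sketch of where a (super-)block scale "fudge factor" can live.
// Names and the 1.05f value are hypothetical, not from this PR.
#include <cstdint>
#include <cmath>

constexpr float kFudge = 1.05f; // hypothetical correction to the block scale
constexpr int   kBlock = 32;

struct Block {
    float  d;          // stored block scale
    int8_t q[kBlock];  // quantized values
};

// The bug: the factor is applied only inside one backend's dequantization
// kernel. Any other path that reads b.d directly misses the correction.
void dequantize_cuda_style(const Block & b, float * y) {
    const float d = kFudge * b.d;          // factor lives here only
    for (int i = 0; i < kBlock; ++i) y[i] = d * b.q[i];
}

// The fix: bake the factor into the scale at quantization time, so every
// consumer of the stored scale (CPU, CUDA, matmul kernels) agrees.
Block quantize(const float * x) {
    Block b{};
    float amax = 0.0f;
    for (int i = 0; i < kBlock; ++i) amax = std::fmax(amax, std::fabs(x[i]));
    const float d = amax > 0 ? amax / 127.0f : 0.0f;
    for (int i = 0; i < kBlock; ++i) {
        b.q[i] = d > 0 ? (int8_t)std::lround(x[i] / d) : (int8_t)0;
    }
    b.d = kFudge * d;                      // correction stored once
    return b;
}
```

Baking the factor into the stored scale costs nothing at inference time and is automatically consistent across every backend that reads the scale.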