ikawrakow / ik_llama.cpp

llama.cpp fork with additional SOTA quants and improved performance
MIT License

Cleanup scale fudge factors #81

Closed by ikawrakow 1 month ago

ikawrakow commented 1 month ago

Low-bit quants often benefit from a fudge factor applied to the (super-)block scale. While developing IQ2_K and IQ3_K it was faster to change the fudge factor in ggml-cuda/convert.cu and recompile than to change it in the quantization function and re-quantize. But when development was done, I forgot to move the IQ2_K and IQ3_K fudge factors into the quantization function, so they remained only in the CUDA dequantization function (and hence were not applied on any other back-end). This PR fixes that.