google / gemma.cpp

Lightweight, standalone C++ inference engine for Google's Gemma models.
Apache License 2.0

[Feature request] Add quantization methods #17

Open namtranase opened 6 months ago

namtranase commented 6 months ago

It would be awesome if the repo supported quantization methods. Reference: k-quants
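
For context, k-quants are block-wise uniform integer formats: weights are grouped into small blocks, each block quantized to a few bits with its own scale. Below is a minimal sketch of that general idea, not llama.cpp's actual k-quant code; the 32-weight block size and 4-bit width are illustrative.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Block-wise 4-bit uniform quantization: each block of 32 weights stores
// one float scale plus a 4-bit signed value per weight (~5 bits/weight
// once the scale is amortized). Real k-quants pack indices more tightly
// and use superblock scales; this only shows the core round-trip.
constexpr int kBlockSize = 32;

struct Q4Block {
  float scale;           // per-block dequantization scale
  int8_t q[kBlockSize];  // quantized values, clamped to [-8, 7]
};

Q4Block QuantizeBlock(const float* w) {
  float amax = 0.0f;
  for (int i = 0; i < kBlockSize; ++i) amax = std::max(amax, std::fabs(w[i]));
  Q4Block b;
  b.scale = amax / 7.0f;  // map [-amax, amax] onto [-7, 7]
  const float inv = b.scale != 0.0f ? 1.0f / b.scale : 0.0f;
  for (int i = 0; i < kBlockSize; ++i) {
    b.q[i] = static_cast<int8_t>(
        std::lround(std::clamp(w[i] * inv, -8.0f, 7.0f)));
  }
  return b;
}

float Dequantize(const Q4Block& b, int i) { return b.scale * b.q[i]; }
```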

chenxiaoyu3 commented 6 months ago

Waiting for a quantized model, +1.

austinvhuang commented 6 months ago

Understood. The -sfp models already use 8-bit weights, but I take the point that people are interested in more aggressive quantization.
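
For reference, an 8-bit floating-point weight keeps a sign, a few exponent bits, and a small mantissa. The sketch below is a generic toy encoding with 1 sign, 4 exponent, and 3 mantissa bits, shown only to illustrate the idea; it is not gemma.cpp's actual SFP implementation.

```cpp
#include <cmath>
#include <cstdint>

// Toy 8-bit float: 1 sign bit, 4 exponent bits (bias 7), 3 mantissa bits.
// No infinities/NaNs; underflow rounds to zero, overflow saturates.
uint8_t EncodeFp8(float x) {
  const uint8_t sign = x < 0.0f ? 0x80 : 0x00;
  x = std::fabs(x);
  if (x == 0.0f) return sign;
  int e;
  const float m = std::frexp(x, &e);  // x = m * 2^e with m in [0.5, 1)
  int exp = e + 6;                    // rebias: stored = (e - 1) + 7
  int mant = static_cast<int>((m * 2.0f - 1.0f) * 8.0f + 0.5f);  // 3 bits
  if (mant == 8) { mant = 0; ++exp; }  // mantissa rounding carried over
  if (exp <= 0) return sign;           // underflow -> signed zero
  if (exp > 15) return sign | 0x7F;    // overflow -> saturate to max
  return sign | static_cast<uint8_t>(exp << 3) | static_cast<uint8_t>(mant);
}

float DecodeFp8(uint8_t b) {
  const int exp = (b >> 3) & 0xF;
  if (exp == 0) return (b & 0x80) ? -0.0f : 0.0f;  // toy: no subnormals
  const float m = 1.0f + (b & 0x7) / 8.0f;         // implicit leading 1
  const float x = std::ldexp(m, exp - 7);
  return (b & 0x80) ? -x : x;
}
```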

BTW, for just decreasing the memory footprint, there was a commit that makes the KV cache preallocation smaller and configurable: https://github.com/google/gemma.cpp/commit/129e66ada2b4e461bdf28b88b70cd2465cb213e4 - but I get that the benefits of aggressive quantization go beyond that.
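
To make the footprint point concrete: KV cache memory is roughly layers x preallocated sequence length x KV heads x head dimension x 2 (keys and values) x bytes per element, so shrinking the preallocated length cuts it proportionally. A back-of-the-envelope sketch follows; the model shape is illustrative, not read from gemma.cpp's configs.

```cpp
#include <cstddef>
#include <cstdio>

// Rough KV cache size: layers * seq_len * kv_heads * head_dim
// * 2 (K and V) * bytes per element.
size_t KvCacheBytes(size_t layers, size_t seq_len, size_t kv_heads,
                    size_t head_dim, size_t bytes_per_elem) {
  return layers * seq_len * kv_heads * head_dim * 2 * bytes_per_elem;
}

int main() {
  // Hypothetical 7B-class shape: 28 layers, 16 KV heads, head_dim 256,
  // fp32 cache entries.
  const double kGiB = 1024.0 * 1024.0 * 1024.0;
  std::printf("8k-token preallocation: %.1f GiB\n",
              KvCacheBytes(28, 8192, 16, 256, 4) / kGiB);
  std::printf("1k-token preallocation: %.1f GiB\n",
              KvCacheBytes(28, 1024, 16, 256, 4) / kGiB);
}
```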

Working on a list of priorities + call-for-contributions, will post more soon.

jan-wassenberg commented 6 months ago

FYI, we do support an experimental 4.5-bit quantization method (NUQ), but those weights are not available on Kaggle. We will be able to support this more easily once we can ingest other weight formats (#11).
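
For anyone unfamiliar with nonuniform quantization: the usual scheme stores a small codebook of float centroids per group of weights plus a low-bit index per weight, so a 16-entry codebook costs 4 bits per index plus amortized table overhead, which is where figures like 4.5 bits/weight come from. Below is a minimal encode/decode sketch under that assumption; NUQ's actual clustering and storage layout will differ.

```cpp
#include <array>
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

// Nonuniform quantization sketch: each group of weights shares a 16-entry
// codebook of float centroids; each weight stores a 4-bit index into it.
// The centroid-fitting step (e.g. k-means over the group) is omitted.
struct NuqGroup {
  std::array<float, 16> codebook;  // centroids fit offline per group
  std::vector<uint8_t> indices;    // one 4-bit index per weight (byte here)
};

NuqGroup Encode(const std::vector<float>& w,
                const std::array<float, 16>& codebook) {
  NuqGroup g{codebook, {}};
  g.indices.reserve(w.size());
  for (float x : w) {
    uint8_t best = 0;
    float best_d = std::fabs(x - codebook[0]);
    for (uint8_t k = 1; k < 16; ++k) {  // nearest centroid
      const float d = std::fabs(x - codebook[k]);
      if (d < best_d) { best_d = d; best = k; }
    }
    g.indices.push_back(best);
  }
  return g;
}

float Decode(const NuqGroup& g, size_t i) { return g.codebook[g.indices[i]]; }
```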

jan-wassenberg commented 1 month ago

An update on this: we can now import PyTorch weights. Evaluation of the nonuniform 4.5-bit format is still ongoing.

I'm increasingly concerned about uniform integer quantization in the style of k-quants. Recent work such as https://arxiv.org/pdf/2407.03211 points out that human raters detect much more harm than automated metrics do, especially in non-English languages, even for int8. Another paper also reports concerns after human evals, apparently also with int8.