psyv282j9d opened 1 month ago
Oops, I should add: the imatrix calculation requires a data file. The outcome of the previously referenced discussion seems to have settled on the file linked in this comment.
Direct link to groups_merged-enhancedV3.txt
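For reference, the calibration file is what gets fed to llama.cpp's imatrix tool. A rough sketch of the two-step flow (binary names and flags vary by llama.cpp version; treat this as an illustration and check the linked walkthrough for the exact invocation):

```shell
# 1) build the importance matrix from the calibration file
./imatrix -m model-f16.gguf -f groups_merged-enhancedV3.txt -o imatrix.dat

# 2) quantize, letting the imatrix weight the "important" values
./quantize --imatrix imatrix.dat model-f16.gguf model-Q4_K_M.gguf Q4_K_M
```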
Hello @psyv282j9d!
This sounds like a great feature which I would love to add! I have begun work on tracking memory usage in #392. I will look into applying the imatrix quants; from what I understand, it is a different quantization standard?
My understanding (IANAMLE) is that it calculates a matrix of weights which adjusts the quantized values of "important" tensors. This leads to a quantized model that more closely mimics the original.
From the PR:

> We can then use these `<a_i^2>` as importances (weights) in a weighted RMSE minimization when quantizing the tensor.
So it's not a different quantization method, just a really helpful tweak on top of the existing GGUF quant methods.
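As a toy sketch of that idea (hypothetical names, plain NumPy, and a naive grid search rather than llama.cpp's actual optimizer): given per-weight importances `w_i ~ <a_i^2>`, pick the quantization scale that minimizes the *weighted* squared error instead of treating every weight equally.

```python
import numpy as np

def best_scale(x, w, n_bits=4, n_grid=64):
    """Grid-search a quantization scale minimizing the weighted error
    sum_i w_i * (x_i - dequant(quant(x_i)))^2.
    Toy illustration of the imatrix idea (w_i ~ <a_i^2>); NOT
    llama.cpp's actual algorithm."""
    qmax = 2 ** (n_bits - 1) - 1              # 4-bit symmetric: [-8, 7]
    s0 = np.abs(x).max() / qmax               # naive "absmax" scale
    best_err, best_s = np.inf, s0
    for s in np.linspace(0.5 * s0, 1.5 * s0, n_grid):
        q = np.clip(np.round(x / s), -qmax - 1, qmax)
        err = float(np.sum(w * (x - s * q) ** 2))
        if err < best_err:
            best_err, best_s = err, float(s)
    return best_err, best_s

rng = np.random.default_rng(0)
x = rng.normal(size=256)
w = rng.uniform(0.1, 10.0, size=256)          # stand-in importances <a_i^2>
_, s_flat = best_scale(x, np.ones_like(x))    # plain (unweighted) RMSE scale
err_imx, s_imx = best_scale(x, w)             # importance-weighted scale

# weighted error we'd have gotten using the plain-RMSE scale instead
q = np.clip(np.round(x / s_flat), -8, 7)
err_flat_under_w = float(np.sum(w * (x - s_flat * q) ** 2))
```

In llama.cpp the importances come from averaging squared activations over a calibration corpus, which is exactly why the imatrix calculation needs a data file.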
(All errors my own :-))
Looking over ISQ (based on your previous ask), I found a few things missing that I've learned, via trial and error, are helpful.
imatrix

If you look at the discussion here, you can see that calculating the importance matrix prior to quantization can offset some of the negative effects of quantization. In particular, this comment gives a great walkthrough of which tools to use to calculate the imatrix, and then how to use it when quantizing.

Also, one of the key benefits of Ollama (a Go wrapper around llama.cpp) is in `llm/memory.go`. In the function `EstimateGPULayers`, it calculates, based on available VRAM (or system RAM for Metal), how many layers can be offloaded to the GPU. This number is then passed to the `--n_gpu_layers` option of `llama.cpp`.

What are the chances of incorporating these ideas into ISQ? It would be great to go from safetensors / `bf16` on disk to automagically optimal memory loading for inference. :-)
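To make the second idea concrete, here's a minimal sketch of the kind of arithmetic `EstimateGPULayers` performs (hypothetical function name and numbers, Python rather than Ollama's Go, and far cruder than Ollama's real per-tensor accounting):

```python
def estimate_gpu_layers(free_vram_bytes, n_layers, layer_bytes,
                        overhead_bytes=512 * 1024**2):
    """Fit as many transformer layers as free VRAM allows, after
    reserving some overhead for KV cache / scratch buffers.
    Illustrative only; NOT Ollama's actual memory model."""
    usable = free_vram_bytes - overhead_bytes
    if usable <= 0:
        return 0                      # nothing fits: run fully on CPU
    return min(n_layers, usable // layer_bytes)

# e.g. a ~7B 4-bit model: 32 layers of ~110 MiB each on a 6 GiB GPU
n = estimate_gpu_layers(6 * 1024**3, 32, 110 * 1024**2)
```

The resulting count is what would then be handed to `--n_gpu_layers`; ISQ could do the same query-then-offload step automatically after quantizing.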