Closed: Downtown-Case closed this issue 2 months ago
This does not appear to be a quantized KV cache issue; an FP16 cache returns the same garbled English.
A brand-new 4bpw quantization also returns the same garbled English.
Can you tell me more about how you're prompting the model to get garbage? If I try your 4.1bpw version, it seems to be working fine, both in 0.2.2 master and 0.2.2 dev, with FP16 or Q4 cache. Doesn't seem to break either way.
Is it possible you're running low on VRAM or something?
I have a few gigabytes of VRAM to spare when loading it at short context.
If I load it into exui, even with the default prompt of "Once upon a time", it just loops garbled English over and over with the 4.1bpw quant, while the 3.75bpw quant is fine.
...I know, lol. I'm currently trying to reproduce it with a super minimal exllama script, and working my way up from there.
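The skeleton I'm starting from is basically the stock exllamav2 example generator, something along these lines (a rough sketch; the model path and context length are placeholders, not my exact script):

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

# Placeholder path to the suspect quant
model_dir = "/path/to/Qwen_Qwen2.5-32B-exl2-4.1bpw"

config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, max_seq_len=32768, lazy=True)  # FP16 cache, modest context
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Once upon a time", max_new_tokens=200))
```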
...I'm a moron. I overwrote an ancient test model in exui, and it turns out RoPE scale was set at 4.0.
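For anyone hitting the same thing: a leftover linear RoPE scale reproduces the garbage when loading outside the UI, and resetting it fixes it. A sketch, assuming the scale_pos_emb / scale_alpha_value attributes on ExLlamaV2Config (names may differ by version):

```python
from exllamav2 import ExLlamaV2Config

config = ExLlamaV2Config("/path/to/Qwen_Qwen2.5-32B-exl2-4.1bpw")  # placeholder path

# A stale override like this is what garbled my output; the model expects no scaling.
# config.scale_pos_emb = 4.0   # <- reproduces the looping garbage
config.scale_pos_emb = 1.0     # linear RoPE scale (assumed attribute name)
config.scale_alpha_value = 1.0 # NTK alpha (assumed attribute name)
```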
I appreciate the quick response anyway!
For reference, Qwen 2.5 doesn't seem to mind Q4 cache like Qwen 2 does.
It actually does. Try 7B with Q4 cache: the first tokens are fine, then it quickly starts outputting garbage. At Q6 and above the precision doesn't seem to matter; Q6, Q8, and FP16 answer correctly at a similar rate. 14B and up with Q4 cache is (mostly) fine.
You're talking about weight quantization, not cache quantization, right?
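Either way, comparing cache precisions on the 7B directly could look something like this (a rough sketch; it assumes the ExLlamaV2Cache_Q4/Q6/Q8 classes and the dynamic generator, and the model path is a placeholder):

```python
import gc
import torch
from exllamav2 import (
    ExLlamaV2, ExLlamaV2Config, ExLlamaV2Tokenizer,
    ExLlamaV2Cache, ExLlamaV2Cache_Q4, ExLlamaV2Cache_Q6, ExLlamaV2Cache_Q8,
)
from exllamav2.generator import ExLlamaV2DynamicGenerator

MODEL_DIR = "/path/to/Qwen2.5-7B-exl2"  # placeholder

def generate_with(cache_cls, prompt, max_new_tokens=128):
    # Reload the model for each cache variant so the runs don't interfere.
    config = ExLlamaV2Config(MODEL_DIR)
    model = ExLlamaV2(config)
    cache = cache_cls(model, lazy=True)
    model.load_autosplit(cache)
    tokenizer = ExLlamaV2Tokenizer(config)
    generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
    return generator.generate(prompt=prompt, max_new_tokens=max_new_tokens)

for cache_cls in (ExLlamaV2Cache_Q4, ExLlamaV2Cache_Q6, ExLlamaV2Cache_Q8, ExLlamaV2Cache):
    print(f"--- {cache_cls.__name__} ---")
    print(generate_with(cache_cls, "Once upon a time"))
    gc.collect()
    torch.cuda.empty_cache()  # try to release VRAM between runs
```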
OS
Linux
GPU Library
CUDA 12.x
Python version
3.12
Pytorch version
2.3, 2.4, 2.6 nightly, flash-attn and xformers built from source, exllama built from master branch
Describe the bug
Qwen 2.5 32B returns garbage output with certain quantizations above 4bpw, but not with ones below 4bpw.
Possibly related to #621 or #627
What's unusual is that lower quantizations work, but higher ones do not.
These two quants work for me:
https://huggingface.co/Downtown-Case/Qwen_Qwen2.5-32B-Base-exl2-3.92bpw
https://huggingface.co/Downtown-Case/Qwen_Qwen2.5-32B-Base-exl2-3.75bpw
While this one (and a 4.04bpw quant I had locally) returns garbage:
Here's an example command I used for quantization:
python convert.py --in_dir "/home/down/Models/Raw/Qwen_Qwen2.5-32B" -o "/home/down/FastStorage/scratch2" -m "/home/down/Models/calibration/Q32-base.json" -b 4.0 -hb 6 -cf "/home/down/Models/exllama/Qwen_Qwen2.5-32B-exl2-4.0bpw" -nr --fast_safetensors
Re-doing the calibration from scratch doesn't seem to make a difference, and that same calibration was used for the sub-4bpw quantizations.
I tried quantizing at 4.1/4.04 bpw in multiple PyTorch environments, with different versions of flash-attention installed, remaking the measurement JSON from scratch, and so on. My test is a 75K-context story at Q4 cache quantization, simply continuing it in exui. Again, the sub-4bpw quantizations continue it coherently, while the ones over 4bpw return garbled English, with no errors in the console.
I'm running through more troubleshooting steps now (like trying different levels of cache quantization and making more quantizations), but figured I'd post early since others seem to be having issues with Qwen.
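For reference, the same kind of continuation test can be scripted outside exui, roughly like this (a sketch; the story file and cache length are placeholders, and it assumes the Q4 cache class and the dynamic generator API):

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache_Q4, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

config = ExLlamaV2Config("/home/down/Models/exllama/Qwen_Qwen2.5-32B-exl2-4.0bpw")
model = ExLlamaV2(config)
cache = ExLlamaV2Cache_Q4(model, max_seq_len=81920, lazy=True)  # ~75K prompt plus headroom
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)

with open("story.txt") as f:  # hypothetical ~75K-token story
    story = f.read()

# Coherent continuation from the sub-4bpw quants; garbled English from the 4.1/4.04bpw ones.
print(generator.generate(prompt=story, max_new_tokens=300))
```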