wejoncy / QLLM

A general 2-8 bit quantization toolbox with GPTQ/AWQ/HQQ, and easy export to ONNX/ONNX Runtime.
Apache License 2.0

AWQ Marlin Quantization #123

Closed · Abhis-123 closed 3 months ago

Abhis-123 commented 3 months ago

```
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
2024-06-19 14:51:42,049 - qllm - INFO - loading model from Weyaxi/Einstein-v6.1-Llama3-8B
Loading checkpoint shards: 100%|██████████████████| 4/4 [00:08<00:00,  2.06s/it]
2024-06-19 14:51:50,575 - qllm - INFO - loading dataset from pileval
2024-06-19 14:51:50,576 - qllm - INFO - found cached dataloader in /tmp/qllm_vroot/_WeyaxiEinstein-v61-Llama3-8B_pileval_16_2048_0_dataloader.pt
Starting ...
Ready.
Running AWQ...: 100%|███████████████████████████| 32/32 [06:47<00:00, 12.73s/it]
2024-06-19 14:58:38,872 - qllm - WARNING - Failed to import qllm.ort_ops
2024-06-19 14:58:38,873 - qllm - WARNING - Failed to import qllm.awq_inference_engine
awq_inference_engine not found, will skip it.
ort_ops is not installed. Will fallback to Torch Backend
marlin_cuda is not installed. marlin_cuda is not use
Replacing linear layers...: 100%|█████████████| 454/454 [00:06<00:00, 66.76it/s]
Packing weights....:   0%|                          | 0/224 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.8/dist-packages/qllm/main.py", line 6, in <module>
    main()
  File "/usr/local/lib/python3.8/dist-packages/qllm/run.py", line 78, in main
    model_quanter.run(args)
  File "/usr/local/lib/python3.8/dist-packages/qllm/auto_model_quantization.py", line 217, in run
    model = self.pack_model(model, quantizers, args.pack_mode)
  File "/usr/local/lib/python3.8/dist-packages/qllm/auto_model_quantization.py", line 94, in pack_model
    qlayers[name].pack(attention_layers[name], scale, zero, g_idx)
  File "/usr/local/lib/python3.8/dist-packages/qllm/modeling/q_layers/quant_linear_marlin.py", line 99, in pack
    assert zeros is None or torch.all(zeros == 8)
AssertionError
```

wejoncy commented 3 months ago

Hi.

Thanks for filing the issue.

Marlin only supports symmetric quantization, so adding `--sym` to the CLI command will work for you. With symmetric 4-bit quantization the zero point is fixed at 8 (the midpoint of the 0-15 range), which is exactly what the failing assertion in `quant_linear_marlin.py` checks for; asymmetric AWQ produces per-group zero points that vary, so the pack step asserts.
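
For reference, a sketch of the re-run with symmetric quantization enabled. Only `--sym` is the confirmed fix; the other flag names and values below are assumptions reconstructed from the log above (model, dataset, Marlin pack mode) and may differ from your original command, so check `python -m qllm --help`:

```bash
# Hypothetical re-run: only --sym is the confirmed fix; the remaining flags
# are reconstructed from the log and may need adjusting to your setup.
python -m qllm --model Weyaxi/Einstein-v6.1-Llama3-8B \
    --method awq --dataset pileval \
    --pack_mode MARLIN --sym \
    --save ./Einstein-v6.1-Llama3-8B-awq-marlin
```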