mit-han-lab / llm-awq

[MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration
MIT License

Int4 weight quantization, but the weight is actually int16 #162

Open dongxuemin666 opened 3 months ago

dongxuemin666 commented 3 months ago

Hi, I used int4 weight quantization, but when I run inference I find that the weight is actually int16. Is my pipeline wrong?

(screenshot 屏幕截图 2024-03-19 112200.png failed to upload; see the next comment)

dongxuemin666 commented 3 months ago
[screenshot: 屏幕截图 2024-03-19 112200]

The image in my first comment seems to be broken; please see this one instead.

dongxuemin666 commented 3 months ago

Below are the scripts I use to quantize:

python -m awq.entry --model_path $MODEL \
    --w_bit 4 --q_group_size 128 \
    --run_awq --dump_awq awq/llava_w4/llava-v1.6-vicuna-7b-w4-g128.pt

python -m awq.entry --model_path $MODEL \
    --w_bit 4 --q_group_size 128 \
    --load_awq awq/llava_w4/llava-v1.6-vicuna-7b-w4-g128.pt \
    --q_backend real --dump_quant awq/llava_w4/llava-v1.6-vicuna-7b-w4-g128-awq.pt
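A minimal inspection sketch for checking what the second command actually wrote to disk, assuming the `--dump_quant` output is a plain state dict saved with `torch.save` and that the packed buffers follow the usual `qweight`/`qzeros`/`scales` naming (both are assumptions, not confirmed from the repo):

```python
import torch

# Assumed path: the real-quantized checkpoint produced by --dump_quant above.
ckpt_path = "awq/llava_w4/llava-v1.6-vicuna-7b-w4-g128-awq.pt"

# Assuming the checkpoint is a plain state dict saved with torch.save().
state_dict = torch.load(ckpt_path, map_location="cpu")

# Print dtype and shape of the quantization-related tensors. The 4-bit values
# are packed into wider integer containers, so the reported dtype will be an
# integer type such as int16/int32, never a literal "int4".
for name, tensor in state_dict.items():
    if any(key in name for key in ("qweight", "qzeros", "scales")):
        print(f"{name:60s} {str(tensor.dtype):14s} {tuple(tensor.shape)}")
```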

dongxuemin666 commented 3 months ago

I get it now: the weight is only fake int4; in the actual calculation it is int16.
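One reason the stored dtype can read as int16 rather than int4: PyTorch has no native 4-bit tensor dtype, so quantized weights are typically packed several values per element into a wider integer tensor. The sketch below shows one illustrative packing scheme (four 4-bit values per int16); it is not necessarily the exact layout used by llm-awq's kernels.

```python
import torch

# Illustrative only: pack four 4-bit values (0..15) into one int16 element.
# The real packing order/container used by llm-awq may differ.
def pack_int4_to_int16(vals: torch.Tensor) -> torch.Tensor:
    # vals: integer tensor of shape (..., 4n) with entries in [0, 15]
    assert vals.shape[-1] % 4 == 0
    v = vals.to(torch.int32).reshape(*vals.shape[:-1], -1, 4)
    packed = v[..., 0] | (v[..., 1] << 4) | (v[..., 2] << 8) | (v[..., 3] << 12)
    return packed.to(torch.int16)  # bits are preserved; dtype is now int16

def unpack_int16_to_int4(packed: torch.Tensor) -> torch.Tensor:
    p = packed.to(torch.int32) & 0xFFFF  # view the 16 bits as unsigned
    nibbles = torch.stack([(p >> s) & 0xF for s in (0, 4, 8, 12)], dim=-1)
    return nibbles.reshape(*packed.shape[:-1], -1)

w4 = torch.randint(0, 16, (2, 8))   # fake 4-bit weights
w16 = pack_int4_to_int16(w4)        # stored dtype is int16, 4x fewer elements
assert torch.equal(unpack_int16_to_int4(w16), w4)
print(w4.dtype, "->", w16.dtype)    # torch.int64 -> torch.int16
```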

ponytaill commented 2 months ago

> I get it now: the weight is only fake int4; in the actual calculation it is int16.

If it's convenient for you, could you explain it?