casper-hansen / AutoAWQ

AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. Documentation:
https://casper-hansen.github.io/AutoAWQ/
MIT License

Can you give me some advice about parameter settings? #612

Open · lzcchl opened this issue 2 weeks ago

lzcchl commented 2 weeks ago

My use case and hardware:

- model: Qwen2-72B-Instruct
- max_token_len (input + output): 20000
- GPUs: 4x A100

When I use the code from https://github.com/casper-hansen/AutoAWQ/blob/main/docs/examples.md and change the parameters of model.quantize as below:

from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "Qwen/Qwen2-72B-Instruct"
# standard 4-bit config from the AutoAWQ docs example
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoAWQForCausalLM.from_pretrained(
    model_path, **{"low_cpu_mem_usage": True, "use_cache": False}
)

model.quantize(
    tokenizer,
    quant_config=quant_config,
    calib_data=load_my_data(),       # my own calibration data loader
    n_parallel_calib_samples=1,
    max_calib_samples=128,
    max_calib_seq_len=20000,
)

But it runs out of memory (OOM) and uses only one GPU. I also set device_map='auto', but it OOMs again. How should I change the parameters to make it run?

casper-hansen commented 2 weeks ago

You can't use such a long calibration sequence length because it does not fit in memory.
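
For reference, a minimal sketch of a calibration call that is more likely to fit in memory (assuming the same calibration loader as above; 512 is the library's default max_calib_seq_len, and the calibration length does not cap the context window of the quantized model at inference time):

model.quantize(
    tokenizer,
    quant_config=quant_config,
    calib_data=load_my_data(),       # same calibration texts as before
    n_parallel_calib_samples=1,      # process one sample at a time to cap peak memory
    max_calib_samples=128,           # library default
    max_calib_seq_len=512,           # calibration length only; inference can still use 20k context
)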