Vahe1994 / AQLM

Official PyTorch repository for "Extreme Compression of Large Language Models via Additive Quantization" (https://arxiv.org/pdf/2401.06118.pdf) and "PV-Tuning: Beyond Straight-Through Estimation for Extreme LLM Compression" (https://arxiv.org/abs/2405.14852)
Apache License 2.0

How long does it take to quantize? #32

Closed · fahadh4ilyas closed this issue 6 months ago

fahadh4ilyas commented 8 months ago

I've been using quantization tools like GPTQ, ExLlama, and QuIP#. Those tools are quite fast at quantizing on a single A6000 GPU, but this tool takes a really long time even though I'm using two A6000 GPUs. How long does it take to quantize Mistral 7B on two A6000 GPUs with these parameters:

```
python main.py ../models/my-mistral-7B wikitext2 \
    --nsamples=1024 \
    --num_codebooks=1 --nbits_per_codebook=16 \
    --in_group_size=8 \
    --relative_mse_tolerance=0.01 \
    --finetune_relative_mse_tolerance=0.001 \
    --finetune_batch_size=32 --local_batch_size=1 \
    --save ../models/my-mistral-7B-AQLM \
    --model_seqlen 8192 --offload_activations
```
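For reference, a cheaper trial run along the lines below can help gauge per-layer timing before committing to a full-length run. This is a minimal sketch that reuses the same main.py flags shown above; the reduced --nsamples value and the 2x8-bit codebook configuration are illustrative assumptions rather than settings recommended by the authors (a 2^8-entry codebook is far cheaper to search than the 2^16-entry one in the original command, and fewer calibration samples shorten each layer's fit, both at some cost in quality):

```
# Hypothetical quick trial: fewer calibration samples and two small
# 8-bit codebooks instead of one 16-bit codebook, to estimate how long
# a single layer takes before launching the full configuration.
# These values trade accuracy for speed and are assumptions only.
python main.py ../models/my-mistral-7B wikitext2 \
    --nsamples=128 \
    --num_codebooks=2 --nbits_per_codebook=8 \
    --in_group_size=8 \
    --relative_mse_tolerance=0.01 \
    --save ../models/my-mistral-7B-AQLM-trial \
    --model_seqlen 8192 --offload_activations
```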
iamwavecut commented 8 months ago

See #28

github-actions[bot] commented 7 months ago

This issue is stale because it has been open for 30 days with no activity.

github-actions[bot] commented 6 months ago

This issue was closed because it has been inactive for 14 days since being marked as stale.