mit-han-lab / llm-awq

[MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration
MIT License

Memory increases significantly during inference #196

Open xpq-tech opened 4 months ago

xpq-tech commented 4 months ago

We used AWQ to quantize a model with the same architecture as LLaMA 2. After quantization, VRAM usage was only 6567 MB right after loading, but it grew to 32223 MB during inference when generating up to 500 tokens. Is this behavior inherent to AWQ, or is there an error in our implementation? Looking forward to your response.
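
For reference, a minimal sketch of how we measured the growth (not our exact code; the checkpoint path is a placeholder and we assume a Hugging Face-style model/tokenizer on CUDA). It separates the memory held right after loading from the peak reached during generation, which should mostly reflect runtime allocations such as the KV cache:

```python
# Sketch: compare VRAM after model load vs. peak VRAM during generation.
# Assumes torch + transformers are installed; model_path is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/awq-quantized-model"  # placeholder path
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16
).cuda()

inputs = tokenizer("Hello, my name is", return_tensors="pt").to("cuda")

torch.cuda.reset_peak_memory_stats()
print(f"after load: {torch.cuda.memory_allocated() / 2**20:.0f} MiB")

with torch.no_grad():
    # Generate up to 500 new tokens with the KV cache enabled.
    model.generate(**inputs, max_new_tokens=500, use_cache=True)

print(f"peak during generation: {torch.cuda.max_memory_allocated() / 2**20:.0f} MiB")
```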