mit-han-lab / llm-awq

[MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration

Support Llama3 and update on-the-fly rope scaling #177

Closed · kentang-mit closed this 5 months ago

kentang-mit commented 5 months ago

Worked with @ys-2020 to add support for Llama3 and for on-the-fly RoPE scaling (used for long-context extension).
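For readers unfamiliar with the feature: "RoPE scaling" stretches the rotary position embeddings so a model trained on a short context can be run at longer sequence lengths, and "on the fly" means the cos/sin tables are rebuilt at inference time rather than fixed at model load. The snippet below is a minimal sketch of the common linear-interpolation variant; the function name, signature, and caching scheme are illustrative assumptions, not this PR's actual code.

```python
import torch

def rope_cos_sin(seq_len: int, head_dim: int,
                 base: float = 10000.0,
                 scaling_factor: float = 1.0):
    # Hypothetical helper, not the llm-awq implementation.
    # Standard RoPE inverse frequencies over pairs of dimensions.
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    # Linear RoPE scaling: divide positions by the scaling factor so a
    # longer context maps back into the position range seen in training.
    positions = torch.arange(seq_len).float() / scaling_factor
    freqs = torch.outer(positions, inv_freq)    # (seq_len, head_dim / 2)
    emb = torch.cat((freqs, freqs), dim=-1)     # (seq_len, head_dim)
    return emb.cos(), emb.sin()

# "On the fly": recompute the tables whenever the requested length
# exceeds what was cached, instead of precomputing a fixed maximum.
_cache = {"len": 0, "cos": None, "sin": None}

def get_rope(seq_len: int, head_dim: int, scaling_factor: float = 1.0):
    if seq_len > _cache["len"]:
        cos, sin = rope_cos_sin(seq_len, head_dim, scaling_factor=scaling_factor)
        _cache.update(len=seq_len, cos=cos, sin=sin)
    return _cache["cos"][:seq_len], _cache["sin"][:seq_len]
```

For example, `get_rope(8192, 128, scaling_factor=4.0)` would squeeze 8192 positions into the range of a 2048-token training context, which is the basic idea behind long-context extension via position interpolation.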