mit-han-lab / llm-awq

[MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration
MIT License

AWQ for non-transformer layers? #164

Open satabios opened 6 months ago

How should one go about applying AWQ to a non-transformer layer? Any suggestions would be helpful. Thanks in advance.
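For context on the question: the core AWQ idea (protect salient weight channels by scaling them according to activation magnitude before quantizing, and folding the inverse scale into the input) is not transformer-specific and can in principle be applied to any linear layer. Below is a minimal, self-contained NumPy sketch of that idea, not the repo's actual API. All function names (`quantize`, `awq_scale_search`), the grid-search over `alpha`, and the normalization of the scales are illustrative assumptions; the real implementation operates on PyTorch modules and uses calibration data.

```python
import numpy as np

def quantize(w, n_bits=4):
    # Symmetric per-output-channel quantization of a weight matrix
    # of shape (out_features, in_features).
    max_val = np.abs(w).max(axis=1, keepdims=True)
    max_val[max_val == 0] = 1e-8
    q_max = 2 ** (n_bits - 1) - 1
    scale = max_val / q_max
    return np.round(w / scale) * scale

def awq_scale_search(w, x, n_bits=4, grid=20):
    # AWQ-style search (illustrative): pick per-input-channel scales
    # s = act_mean ** alpha, quantize W * diag(s), feed x / s, and
    # keep the alpha that minimizes output MSE against the fp32 layer.
    act = np.abs(x).mean(axis=0)          # per-channel activation magnitude
    act[act == 0] = 1e-8
    ref = x @ w.T                          # full-precision reference output
    best_err, best_s = float("inf"), np.ones_like(act)
    for i in range(grid + 1):
        alpha = i / grid
        s = act ** alpha
        s = s / np.sqrt(s.max() * s.min())  # keep scales centered around 1
        wq = quantize(w * s, n_bits)        # scaled-then-quantized weights
        out = (x / s) @ wq.T                # inverse scale folded into input
        err = ((out - ref) ** 2).mean()
        if err < best_err:
            best_err, best_s = err, s
    return best_s, best_err

# Toy calibration data with a few high-magnitude ("salient") channels.
rng = np.random.default_rng(0)
x = rng.normal(size=(128, 64)) * (1 + 9 * (rng.random(64) < 0.05))
w = rng.normal(size=(32, 64))

s, awq_err = awq_scale_search(w, x)
plain_err = (((x @ quantize(w).T) - x @ w.T) ** 2).mean()
```

Since `alpha = 0` reduces to plain round-to-nearest quantization, the searched error can never be worse than the unscaled baseline; the interesting case is when a few input channels carry much larger activations than the rest, which is the regime AWQ targets.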