intel / auto-round

Advanced Quantization Algorithm for LLMs. This is the official implementation of "Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs".
https://arxiv.org/abs/2309.05516
Apache License 2.0
172 stars · 20 forks

support `transformers.Conv1D` packing #118

Closed · Kaihui-intel closed this 3 months ago

Kaihui-intel commented 3 months ago

Adds a `transformers.Conv1D` check to the packing path.
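
For context, a minimal sketch of what such a check might involve, assuming the packing code expects weights in `torch.nn.Linear` layout; `get_weight_for_packing` is a hypothetical helper for illustration, not auto-round's actual API:

```python
# Hypothetical helper; names are illustrative, not auto-round's real API.
import torch
import transformers


def get_weight_for_packing(layer: torch.nn.Module) -> torch.Tensor:
    """Return the layer's weight in (out_features, in_features) layout.

    transformers.Conv1D (used by GPT-2-style models) stores its weight
    transposed relative to torch.nn.Linear, so it must be transposed
    before being packed the same way as a Linear weight.
    """
    if isinstance(layer, transformers.Conv1D):
        # Conv1D weight shape is (in_features, out_features); transpose it.
        return layer.weight.t().contiguous()
    if isinstance(layer, torch.nn.Linear):
        # Linear weight is already (out_features, in_features).
        return layer.weight
    raise TypeError(f"Unsupported layer type for packing: {type(layer)}")
```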

Kaihui-intel commented 3 months ago

CI failed due to `export_to_autoround`; the failure is not related to this PR.