NVIDIA / TensorRT-Model-Optimizer

TensorRT Model Optimizer is a unified library of state-of-the-art model optimization techniques such as quantization, pruning, and distillation. It compresses deep learning models for downstream deployment frameworks like TensorRT-LLM or TensorRT to optimize inference speed on NVIDIA GPUs.
https://nvidia.github.io/TensorRT-Model-Optimizer

Quantization for SD1.5 #99

Open zeng121 opened 2 weeks ago

zeng121 commented 2 weeks ago

Is custom attention quantization supported for SD1.5?

jingyu-ml commented 1 week ago

I'm not sure what you mean by custom attention. Are you referring to creating your own attention layer?
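For background on what quantizing an attention layer involves, the core operation is mapping floating-point weights (e.g. the Q/K/V projection matrices) to low-precision integers with a scale factor. The sketch below is a minimal, pure-Python illustration of symmetric int8 fake quantization; it is a conceptual example only, not the TensorRT Model Optimizer API, and the function names are hypothetical.

```python
# Conceptual sketch of per-tensor symmetric int8 quantization, the basic
# transform a quantizer applies to weights such as attention projections.
# Not the modelopt API; names here are illustrative.

def quantize_int8(values):
    """Map floats to int8 using a per-tensor symmetric scale."""
    amax = max(abs(v) for v in values)           # absolute-max calibration
    scale = amax / 127.0 if amax > 0 else 1.0    # int8 range is [-128, 127]
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the int8 representation."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.003, 1.0]
q, scale = quantize_int8(weights)
# q → [50, -127, 0, 100]; dequantize(q, scale) recovers the weights
# up to the quantization error (0.003 rounds to 0).
```

A real quantizer also handles per-channel scales, activation calibration over a dataset, and insertion of quantize/dequantize nodes into the graph, which is what the library automates for supported layer types.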