NVIDIA / TensorRT-Model-Optimizer

TensorRT Model Optimizer is a unified library of state-of-the-art model optimization techniques such as quantization and sparsity. It compresses deep learning models for downstream deployment frameworks like TensorRT-LLM or TensorRT to optimize inference speed on NVIDIA GPUs.
https://nvidia.github.io/TensorRT-Model-Optimizer

int8 diffuser smoothquant will not generate good images #32

Open 13301338176 opened 2 weeks ago

13301338176 commented 2 weeks ago

https://github.com/NVIDIA/TensorRT-Model-Optimizer/blob/6355a47712a6c1a7a7ffd6af75bd6bfb84ac5b21/diffusers/quantization/utils.py#L80C23-L80C49

When SmoothQuant is enabled, the generated image quality is not good.
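
For context, below is a minimal sketch of how INT8 SmoothQuant is typically enabled through Model Optimizer's `mtq.quantize` API. The reporter's exact command line and model are not given, so the pipeline, model ID, and calibration prompts here are illustrative assumptions, not the repro settings.

```python
# Sketch only: illustrates enabling INT8 SmoothQuant with modelopt.
# The model, prompts, and step count are assumptions, not the reporter's setup.
import torch
import modelopt.torch.quantization as mtq
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example model; the reporter's model is unknown
    torch_dtype=torch.float16,
).to("cuda")

def forward_loop(unet):
    # Run a few calibration prompts through the pipeline so the inserted
    # quantizers can collect activation statistics for the UNet.
    for prompt in ["a photo of a cat", "a landscape painting of mountains"]:
        pipe(prompt, num_inference_steps=20)

# INT8 SmoothQuant config; comparing against mtq.INT8_DEFAULT_CFG
# (plain INT8 without SmoothQuant) is a useful A/B check when quality drops.
pipe.unet = mtq.quantize(pipe.unet, mtq.INT8_SMOOTHQUANT_CFG, forward_loop)
```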

jingyu-ml commented 1 week ago

Which models did you use, and could you share your command lines and detailed settings?