TensorRT Model Optimizer is a unified library of state-of-the-art model optimization techniques such as quantization and sparsity. It compresses deep learning models for downstream deployment frameworks like TensorRT-LLM or TensorRT to optimize inference speed on NVIDIA GPUs.
https://github.com/NVIDIA/TensorRT-Model-Optimizer/blob/6355a47712a6c1a7a7ffd6af75bd6bfb84ac5b21/diffusers/quantization/utils.py#L80C23-L80C49
When SmoothQuant is enabled, the quality of the generated images degrades noticeably compared to running without it.
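For context on why this setting affects output quality: SmoothQuant migrates quantization difficulty from activations to weights with a per-input-channel scale controlled by a parameter alpha, so an ill-suited alpha for a diffusion model can amplify weight quantization error and hurt images. A minimal NumPy sketch of the transform (helper name, shapes, and alpha value are illustrative, not taken from the linked utils.py):

```python
import numpy as np

def smoothquant_scales(x, w, alpha=0.5):
    # Per-input-channel smoothing factor: s_j = max|X_j|^alpha / max|W_j|^(1-alpha)
    act_max = np.abs(x).max(axis=0)   # per-channel activation range, shape [in]
    wt_max = np.abs(w).max(axis=0)    # per-channel weight range, w: [out, in]
    return act_max ** alpha / wt_max ** (1 - alpha)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
x[:, 3] *= 50.0                       # simulate an activation outlier channel
w = rng.normal(size=(16, 8))

s = smoothquant_scales(x, w, alpha=0.8)
x_s, w_s = x / s, w * s               # shift difficulty from activations to weights
# Mathematically equivalent before quantization; the error appears only once
# the rescaled weights/activations are actually quantized.
assert np.allclose(x_s @ w_s.T, x @ w.T)
```

With a large alpha the activation outlier channel is tamed, but the corresponding weight column grows, so weight quantization error increases; that trade-off is one plausible source of the quality drop reported above.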