TensorRT Model Optimizer is a unified library of state-of-the-art model optimization techniques, including quantization, pruning, and distillation. It compresses deep learning models for downstream deployment frameworks like TensorRT-LLM or TensorRT to optimize inference speed on NVIDIA GPUs.
Hi, I'm wondering whether SmoothQuant will be supported in the future for INT8 ONNX quantization? Mainly for ViT-like models and LLMs.
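For context, the core idea of SmoothQuant (Xiao et al.) is to migrate activation outliers into the weights with a per-channel scale before INT8 quantization, which keeps the layer's output mathematically unchanged while making the activations much easier to quantize. Here is a minimal NumPy sketch of that transformation; the function name and the toy layer are illustrative only, not part of the ModelOpt or ONNX quantization API:

```python
import numpy as np

def smoothquant_scales(activations, weight, alpha=0.5):
    """Per-channel smoothing scales: s_j = max|X_j|^alpha / max|W_j|^(1-alpha)."""
    act_max = np.abs(activations).max(axis=0)   # per input-channel activation range
    w_max = np.abs(weight).max(axis=0)          # per input-channel weight range
    return act_max ** alpha / np.maximum(w_max, 1e-8) ** (1 - alpha)

# Toy linear layer Y = X @ W.T, with an outlier in activation channel 2.
rng = np.random.default_rng(0)
X = rng.normal(size=(16, 4))
X[:, 2] *= 50.0                                 # simulate an outlier channel
W = rng.normal(size=(8, 4))                     # (out_features, in_features)

s = smoothquant_scales(X, W, alpha=0.5)
X_s, W_s = X / s, W * s                         # (X/s) @ (W*diag(s)).T == X @ W.T

assert np.allclose(X @ W.T, X_s @ W_s.T)        # layer output is preserved
print("activation range before:", np.abs(X).max(axis=0).round(1))
print("activation range after: ", np.abs(X_s).max(axis=0).round(1))
```

After smoothing, the per-channel activation ranges are far more uniform, so a single INT8 tensor scale loses much less precision; the extra per-channel factors fold into the adjacent weight tensor, which is exactly what makes the technique attractive for an ONNX export path.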