NVIDIA / TensorRT-Model-Optimizer

TensorRT Model Optimizer is a unified library of state-of-the-art model optimization techniques, such as quantization, pruning, and distillation. It compresses deep learning models for downstream deployment frameworks like TensorRT-LLM or TensorRT to optimize inference speed on NVIDIA GPUs.
https://nvidia.github.io/TensorRT-Model-Optimizer
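As a quick illustration of the quantization workflow, here is a minimal sketch built on the library's `modelopt.torch.quantization` API; the toy model, calibration batches, and the choice of the predefined `INT8_DEFAULT_CFG` config are stand-ins for a real setup:

```python
import torch
import modelopt.torch.quantization as mtq

# Stand-in model and calibration data; replace with your own.
model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU()).cuda()
calib_data = [torch.randn(8, 64, device="cuda") for _ in range(4)]

def forward_loop(model):
    # Run a few calibration batches so the inserted quantizers
    # can collect activation statistics.
    for batch in calib_data:
        model(batch)

# Quantize the model with a predefined INT8 configuration.
model = mtq.quantize(model, mtq.INT8_DEFAULT_CFG, forward_loop)
```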

feat: SDXL clear origin pt mem after loaded trt engine #100

Open wxsms opened 2 weeks ago

wxsms commented 2 weeks ago

Without this change, a CUDA OOM will occur on an RTX 4090: the original PyTorch weights stay resident on the GPU even after the TensorRT engine has been loaded.
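For context, a minimal sketch of the idea behind the fix; the names below are stand-ins, not the actual diff, which frees the SDXL pipeline's original PyTorch module once the TRT engine takes over inference:

```python
import gc

import torch

# Stand-in for the PyTorch module that the TensorRT engine replaces;
# in the SDXL example this would be the pipeline's original backbone.
pt_model = torch.nn.Linear(4096, 4096).to("cuda")

# ... TensorRT engine loaded here; it now serves all inference, so the
# PyTorch copy is dead weight occupying GPU memory.
pt_model.to("cpu")          # move the weights off the GPU first
del pt_model                # drop the last Python reference
gc.collect()                # let Python actually free the tensors
torch.cuda.empty_cache()    # return cached blocks to the CUDA driver
```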

kevalmorabia97 commented 2 weeks ago

Hi @wxsms, thank you for your contribution. Unfortunately, we are not able to accept external contributions until we have proper contributor guidelines and approvals in place. @jingyu-ml, can you please verify this and make sure it makes it into the example before the next release?

jingyu-ml commented 2 weeks ago

Thanks for the contribution; the fix looks good to me. As Keval said, we will push the change to the main branch together with this issue as soon as possible.