NVIDIA / TensorRT-Model-Optimizer

TensorRT Model Optimizer is a unified library of state-of-the-art model optimization techniques such as quantization, pruning, distillation, etc. It compresses deep learning models for downstream deployment frameworks like TensorRT-LLM or TensorRT to optimize inference speed on NVIDIA GPUs.
https://nvidia.github.io/TensorRT-Model-Optimizer

how to reduce memory usage? #107

Open dedoogong opened 6 days ago

dedoogong commented 6 days ago

Hello! I converted Segmenter (ViT-Tiny) and Depth Anything (ViT-Small), and the ONNX and TRT files for both are under 30 MB. I built the compiled TRT engine using the onnx_ptq code. However, when I load the compiled small TRT engine, GPU memory usage grows to almost 24 GB, while the original PyTorch model uses only around 2 GB of GPU memory.
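For reference, this is roughly how I load the engine and check memory (a minimal sketch; the engine file name is a placeholder, and the NVML-based measurement helper is my own, not part of the onnx_ptq example):

```python
import pynvml
import tensorrt as trt

def gpu_mem_used_mb(device_index: int = 0) -> float:
    """Return currently used device memory in MB via NVML."""
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(device_index)
    return pynvml.nvmlDeviceGetMemoryInfo(handle).used / 1024**2

before = gpu_mem_used_mb()

# Deserialize the compiled engine (path is hypothetical).
logger = trt.Logger(trt.Logger.WARNING)
with open("segmenter_vit_tiny_int8.engine", "rb") as f:
    engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

after = gpu_mem_used_mb()
print(f"GPU memory grew by {after - before:.0f} MB after loading the engine")
```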

Also, I often can't run int8 PTQ (entropy or minmax calibration) with 512x512 images; I always have to reduce the image size to 224x224 or 256x256 to avoid OOM during calibration. That seems related to the same issue.
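For context, my calibration setup is roughly like this (a minimal sketch with placeholder data; the sample count, shapes, and the exact keyword arguments of the onnx_ptq entry point are assumptions and may differ between ModelOpt versions):

```python
import numpy as np
from modelopt.onnx.quantization import quantize

# Reduced calibration set: fewer samples and 256x256 instead of 512x512,
# otherwise calibration runs out of GPU memory on my machine.
calib_data = np.random.rand(32, 3, 256, 256).astype(np.float32)  # placeholder images

# Keyword names below are what I use locally; they may not match every release.
quantize(
    onnx_path="segmenter_vit_tiny.onnx",
    calibration_data=calib_data,
    calibration_method="entropy",  # or "minmax"
    output_path="segmenter_vit_tiny_int8.onnx",
)
```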

Why does this happen, and how can I avoid it? Inference speed improved 3-4x and accuracy dropped only slightly, so the extremely high memory usage is the only remaining problem. If anyone knows how to handle it, please help!

Thank you!

dedoogong commented 20 hours ago

I found only "from modelopt.torch._deploy._runtime import RuntimeRegistry" takes 20GB GPU! and I debugged further and then I found right after the debug pointer passes AWQClipHelper() of int4.py in onnx.quantization, it takes 20GB suddenly! why? how to solve it? I even use int8 PTQ! not int4!