dedoogong opened 6 days ago
I found that merely running `from modelopt.torch._deploy._runtime import RuntimeRegistry` takes ~20 GB of GPU memory! I debugged further and found that right after the debugger steps past `AWQClipHelper()` in `int4.py` under `onnx.quantization`, usage suddenly jumps by 20 GB. Why? How can I solve it? I'm even using int8 PTQ, not int4!
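For reference, this is roughly how I measured the jump (a minimal sketch using pynvml; the module path above is from my environment, and the exact numbers will differ on other setups):

```python
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

def used_mib():
    # Total device memory currently in use (all processes), in MiB
    return pynvml.nvmlDeviceGetMemoryInfo(handle).used / 2**20

before = used_mib()
from modelopt.torch._deploy._runtime import RuntimeRegistry  # noqa: E402
after = used_mib()
print(f"GPU memory delta from the import alone: {after - before:.0f} MiB")
```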
Hello! I converted Segmenter (ViT-Tiny) and Depth Anything (ViT-Small), and the ONNX and TRT files for both are under 30 MB. I built the compiled TRT engine using the onnx_ptq code. But when I load that small compiled TRT engine, GPU memory usage climbs to almost 24 GB, while the original torch model uses only around 2 GB.
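This is essentially how I load the engine (a minimal sketch with the standard TensorRT Python API; `model.engine` stands in for my actual engine file). Printing `engine.device_memory_size` might help pinpoint where the ~24 GB comes from:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)

# Deserialize the ~30 MB engine built with the onnx_ptq code
with open("model.engine", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())

# Activation memory TensorRT will reserve for execution
print(f"device_memory_size: {engine.device_memory_size / 2**20:.0f} MiB")

# Creating the context is where the big allocation actually happens for me
context = engine.create_execution_context()
```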
On top of that, I often can't run int8 PTQ with entropy or minmax calibration on 512x512 images; I always have to reduce the image size to 224x224 or 256x256 to avoid OOM during PTQ. That also seems related!
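Roughly what my PTQ call looks like (a sketch only; the parameter names follow the `modelopt.onnx.quantization.quantize` API as I understand it and may differ slightly across versions, and the file names and `calib` array are placeholders for my real calibration data):

```python
import numpy as np
from modelopt.onnx.quantization import quantize

# 512x512 calibration batch -> OOM for me; 224x224 or 256x256 works
calib = np.random.rand(64, 3, 512, 512).astype(np.float32)

quantize(
    onnx_path="segmenter_vit_tiny.onnx",
    calibration_data=calib,
    calibration_method="entropy",   # same behavior with "minmax"
    output_path="segmenter_vit_tiny_int8.onnx",
)
```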
Why does this happen, and how can I avoid it? Inference speed improved 3~4x and accuracy dropped only slightly, so this extreme memory usage is the only remaining problem. If anyone knows how to handle it, please help!
Thank you!