NVIDIA / TensorRT

NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
https://developer.nvidia.com/tensorrt
Apache License 2.0

pytorch_quantization fails to quantize nn.ConvTranspose2d when using pytorch >= 1.13 #3129

Closed. BloodAxe closed this issue 1 year ago.

BloodAxe commented 1 year ago

Description

We encountered this issue when trying to quantize the Yolo-NAS model, which uses nn.ConvTranspose2d modules: https://github.com/Deci-AI/super-gradients/issues/1045

It seems that your QuantConvTranspose2d class relies on an internal PyTorch API. Specifically, the class calls the _output_padding method of torch.nn.modules.conv._ConvTransposeNd, which is part of PyTorch's private API.

It looks like this API changed slightly after PyTorch 1.11, and the call now fails because on newer PyTorch versions the method requires two additional arguments.
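
For anyone who wants to confirm the mismatch on their own installation, here is a quick sketch (not part of the original report) that prints the private helper's signature on the installed PyTorch build:

    import inspect
    from torch.nn.modules.conv import _ConvTransposeNd

    # On PyTorch >= 1.12 the printed signature should include an extra
    # required `num_spatial_dims` parameter (plus an optional `dilation`).
    print(inspect.signature(_ConvTransposeNd._output_padding))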

Environment

TensorRT Version: N/A

NVIDIA GPU: 3090

NVIDIA Driver Version: 525.60.13

CUDA Version: 11.7

CUDNN Version: N/A

pytorch_quantization: 2.1.2

Operating System:

Python Version (if applicable): All versions

Tensorflow Version (if applicable):

PyTorch Version (if applicable): 1.13, 2.0

Baremetal or Container (if so, version): Both

Relevant Files

Steps To Reproduce

The following code fails when executed with torch 1.13:

    import torch
    from pytorch_quantization.nn import QuantConvTranspose2d

    model = QuantConvTranspose2d(3, 3, 3, stride=2, padding=1, output_padding=1, bias=False)
    model(torch.randn(1, 3, 32, 32))

Suggested Fix

We were able to quantize and calibrate the model without any issues using the following fix:

    class QuantConvTranspose2d(_QuantConvTransposeNd):
        def forward(self, input, output_size=None):
            ...

            # PyTorch >= 1.12 expects the extra num_spatial_dims (and dilation)
            # arguments when calling the private _output_padding helper.
            if torch_version_is_greater_or_equal(1, 12):
                output_padding = self._output_padding(
                    input=input, output_size=output_size, stride=self.stride,
                    padding=self.padding, kernel_size=self.kernel_size,
                    num_spatial_dims=2, dilation=self.dilation,
                )
            else:
                output_padding = self._output_padding(
                    input=input, output_size=output_size, stride=self.stride,
                    padding=self.padding, kernel_size=self.kernel_size,
                )
            ...
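
Note that `torch_version_is_greater_or_equal` above is a helper from our own code, not part of pytorch_quantization; a minimal sketch of such a helper, assuming the standard `torch.__version__` string and the `packaging` library, could look like:

    import torch
    from packaging import version

    def torch_version_is_greater_or_equal(major: int, minor: int) -> bool:
        # Compare the installed torch release (e.g. "1.13.1+cu117" -> (1, 13))
        # against the requested (major, minor) pair.
        return version.parse(torch.__version__).release[:2] >= (major, minor)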

We are happy to make a PR if the suggested solution looks good to you.

ttyio commented 1 year ago

Thanks @BloodAxe, this is already fixed internally and will be mirrored to the GitHub repo in the next monthly release.

ttyio commented 1 year ago

Fixed in the 23.08 release, version 2.1.3; the PyPI package will also be upgraded soon. Closing, and thanks!
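
One quick way to check whether the fixed build is installed (a small sketch, assuming the distribution is published under the PyPI name `pytorch-quantization`):

    from importlib.metadata import version

    # 2.1.3 or newer should contain the ConvTranspose2d fix discussed above.
    print(version("pytorch-quantization"))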

gitctrlx commented 1 year ago

Hello, I apologize for disturbing you in your free time. May I ask what the highest PyTorch version supported by pytorch_quantization is?
