NVIDIA / TensorRT

NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
https://developer.nvidia.com/tensorrt
Apache License 2.0

Issue with MaskRCNN Model Conversion to TensorRT at VGA Resolution (640x480) #3600

Open TheShiningVampire opened 10 months ago

TheShiningVampire commented 10 months ago

Description

I am encountering issues when converting a MaskRCNN model, trained using Detectron 2, to TensorRT for VGA resolution (640x480). I have followed the standard conversion process as outlined in the ./samples/python/detectron2 directory but made modifications to the augmentation settings to suit the VGA resolution.

Steps to Reproduce

  1. Trained the MaskRCNN model using Detectron 2 for a resolution of 640x480.
  2. Modified the augmentation settings in the conversion script from:

    aug = T.ResizeShortestEdge(
       [1344, 1344], 1344
    )

    to

    aug = T.ResizeShortestEdge(
       [480, 480], 640
    )
  3. Converted the model to ONNX format (the ONNX graph can be seen here).
  4. Converted the ONNX model to a TensorRT engine.

Expected Behavior

I expected the converted TensorRT model to maintain a similar level of accuracy and detection capability as the original Detectron 2 model.

Observed Behavior

After conversion to TensorRT:

Environment

TensorRT Version: 8.6.1.6
NVIDIA GPU: A5000
NVIDIA Driver Version: 525.85.12
CUDA Version: 12.0
CUDNN Version: 8.9.0
Operating System: Ubuntu 20.04
Python Version (if applicable): 3.8.13
PyTorch Version (if applicable): 2.1

Questions and Requests for Help

Thank you in advance for any help or insights provided.

RajUpadhyay commented 10 months ago

Does this explanation help? At least the first paragraph might explain why your resolution is not working as expected.
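One common constraint behind resolution problems in this sample (an assumption on my part, based on the FPN-style MaskRCNN the Detectron 2 sample exports, which is why it pads inputs to 1344x1344): the engine is built with a fixed input shape, and each spatial dimension should be divisible by the backbone's largest feature stride (typically 32) so the feature-map sizes baked into the graph line up. A quick hypothetical sanity check:

```python
def check_input_shape(height, width, max_stride=32):
    """Hypothetical helper: verify both dimensions divide evenly by the
    backbone's largest feature stride (32 for a standard FPN), matching
    the fixed feature-map sizes assumed by the exported graph."""
    bad = [d for d in (height, width) if d % max_stride != 0]
    if bad:
        raise ValueError(f"dims {bad} not divisible by stride {max_stride}")
    return True

print(check_input_shape(480, 640))  # VGA passes: 480/32=15, 640/32=20
```

By this check 640x480 itself is fine, so any accuracy drop is more likely caused by the preprocessing/padding mismatch the linked explanation describes than by the resolution being invalid outright.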