microsoft / onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
https://onnxruntime.ai
MIT License

NMS Operator Output Different From Torchvision Implementation #21898

Open K-prog opened 2 months ago

K-prog commented 2 months ago

Describe the issue

I am trying to encapsulate the torchvision.ops.nms function in an ONNX model. The model conversion and inference run without errors, but the output of the exported ONNX model differs from the torchvision implementation.

To reproduce

import torch
import torchvision
import onnxruntime
import numpy as np

class NMS(torch.nn.Module):
    def __init__(self, iou_threshold=0.45):
        super(NMS, self).__init__()
        self.iou_threshold = iou_threshold

    def forward(self, x):
        boxes = x[:, :4]
        scores = x[:, 4]
        keep = torchvision.ops.nms(boxes, scores, self.iou_threshold)
        return keep

# Test data: 50 candidates; columns 0-3 are used as box corners, column 4 as the score
nms_x = torch.rand(50, 38)

# PyTorch model
nms_model = NMS()
torch_output = nms_model(nms_x)

# Export to ONNX
torch.onnx.export(nms_model, 
                  (nms_x,),
                  "nms.onnx",
                  opset_version=17, 
                  input_names=["input"],
                  output_names=["output"],
                  #dynamic_axes={'input': {0: 'batch'}, 'output': {0: 'batch'}}
                  )

# ONNX Runtime inference
session = onnxruntime.InferenceSession("nms.onnx")

onnx_output = session.run(None, {'input': nms_x.numpy()})[0]

print("PyTorch output shape:", torch_output.shape)
print("ONNX output shape:", onnx_output.shape)

[screenshot: printed output shapes from the script above, showing the two results differ]
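One factor that may be worth ruling out (an assumption on my part, not a confirmed diagnosis): torchvision.ops.nms documents that boxes must satisfy x1 < x2 and y1 < y2, while the ONNX NonMaxSuppression operator accepts any diagonal pair of corners, so purely random test data can make the two paths disagree even when both are internally consistent. Below is a minimal sketch that rebuilds well-formed boxes from the random data, reuses the nms.onnx file exported above, and compares the kept indices themselves rather than only the shapes.

import torch
import torchvision
import onnxruntime
import numpy as np

torch.manual_seed(0)
raw = torch.rand(50, 38)

# Force well-formed boxes: x1 <= x2 and y1 <= y2 for every candidate.
xy_min = torch.minimum(raw[:, 0:2], raw[:, 2:4])
xy_max = torch.maximum(raw[:, 0:2], raw[:, 2:4])
x = raw.clone()
x[:, 0:2] = xy_min
x[:, 2:4] = xy_max

# Reference result from torchvision.
torch_keep = torchvision.ops.nms(x[:, :4], x[:, 4], 0.45)

# Result from the exported model ("nms.onnx" from the script above).
session = onnxruntime.InferenceSession("nms.onnx")
onnx_keep = session.run(None, {"input": x.numpy()})[0]

# Compare the kept indices directly, not only the output shapes.
print("torchvision keep:", torch_keep.numpy())
print("onnxruntime keep:", onnx_keep)
print("same indices:", np.array_equal(np.sort(torch_keep.numpy()), np.sort(onnx_keep)))

If the indices still differ with well-formed boxes, the discrepancy is more likely in the exported NonMaxSuppression node itself and worth investigating further.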

Urgency

I am trying to create a custom model architecture that uses NMS internally. Because of this difference in NMS results, my exported model's output differs hugely from that of the original PyTorch model, so I am blocked on deploying it via onnxruntime. This is fairly urgent.

Platform

Windows

OS Version

11

ONNX Runtime Installation

Built from Source

ONNX Runtime Version or Commit ID

1.18.1

ONNX Runtime API

Python

Architecture

X64

Execution Provider

Default CPU

Execution Provider Library Version

No response

github-actions[bot] commented 1 month ago

This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.