Describe the issue
I am trying to encapsulate the torchvision.ops.nms function in an ONNX model. Model conversion and inference both run without errors, but the output of the exported ONNX model differs from that of the torchvision implementation.
To reproduce
Urgency
I am building a custom model architecture that uses NMS internally. Because of this difference in NMS behavior, my model's output diverges substantially from that of the original PyTorch model, which is blocking deployment via onnxruntime, so this is fairly urgent.
Platform
Windows
OS Version
11
ONNX Runtime Installation
Built from Source
ONNX Runtime Version or Commit ID
1.18.1
ONNX Runtime API
Python
Architecture
X64
Execution Provider
Default CPU
Execution Provider Library Version
No response