Describe the issue
I am profiling the performance of an ONNX model converted from PyTorch 2.3.0 (cu11.8), and it shows that the ONNX model is slightly slower than the PyTorch version. Is there something I missed?
To reproduce
script
![image](https://github.com/microsoft/onnxruntime/assets/126441921/7991a70c-65da-4b2d-8544-2832a465ae96)
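Since the reproduction script is only attached as an image, here is a minimal sketch of how the comparison can be timed fairly. The model path, input shape, and helper names below are placeholders, not taken from the report; the warmup loop and `IOBinding` usage matter because per-run host-to-device copies and cold CUDA kernels are common reasons ONNX Runtime appears slower than PyTorch.

```python
import time
import numpy as np

def measure_latency(run_once, warmup=10, iters=100):
    """Mean seconds per call of `run_once`, measured after warmup runs."""
    for _ in range(warmup):          # warmup: let CUDA kernels and caches settle
        run_once()
    start = time.perf_counter()
    for _ in range(iters):
        run_once()
    return (time.perf_counter() - start) / iters

def ort_latency_ms(model_path, input_array):
    """Latency of one inference with the CUDA EP, using IOBinding to
    avoid re-copying inputs/outputs on every run."""
    import onnxruntime as ort  # imported here so measure_latency stays standalone
    sess = ort.InferenceSession(model_path, providers=["CUDAExecutionProvider"])
    io = sess.io_binding()
    io.bind_cpu_input(sess.get_inputs()[0].name, input_array)
    io.bind_output(sess.get_outputs()[0].name)
    return measure_latency(lambda: sess.run_with_iobinding(io)) * 1e3

# Example usage (placeholder model path and input shape):
# x = np.random.randn(1, 3, 224, 224).astype(np.float32)
# print(f"ORT mean latency: {ort_latency_ms('model.onnx', x):.2f} ms")
```

Timing only `sess.run(...)` in a tight loop without warmup, or including the first call in the average, will typically make ONNX Runtime look slower than it is.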
Urgency
Really urgent: a paper deadline is approaching!
Platform
Linux
OS Version
Ubuntu 22.04
ONNX Runtime Installation
Released Package
ONNX Runtime Version or Commit ID
1.18.0
ONNX Runtime API
Python
Architecture
X64
Execution Provider
CUDA
Execution Provider Library Version
CUDA 11.8, onnx 1.16.1
Model File
No response
Is this a quantized model?
No