-
### Describe the issue
While initializing ONNX Runtime on a mobile device, I encountered an issue with the `QNNExecutionProvider` failing to load the backend library `libQnnHtpVXX.so`. The error log …
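For context, the QNN execution provider is normally selected by passing a `backend_path` provider option pointing at the Qualcomm backend library. The sketch below is not from this report: the library filename, the CPU fallback, and the model name are assumptions, and session creation is left commented out.

```python
# Hypothetical QNN provider configuration (names are assumptions).
# "backend_path" must point at the QNN backend library shipped with the
# Qualcomm AI Engine Direct SDK; "libQnnHtp.so" selects the HTP (NPU) backend.
qnn_options = {
    "backend_path": "libQnnHtp.so",
}

# Provider list with an explicit CPU fallback in case the backend
# library fails to load, as in the error described above.
providers = [
    ("QNNExecutionProvider", qnn_options),
    "CPUExecutionProvider",
]

# Session creation (requires an onnxruntime build with QNN support):
# import onnxruntime as ort
# session = ort.InferenceSession("model.onnx", providers=providers)
```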
-
### Describe the issue
Hello, I am interested in using v1.20.0 on OpenVINO hardware, as the new version claims to have optimized first-inference latency. It seems that v1.20.0 has been released for [o…
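As a point of reference, selecting the OpenVINO execution provider usually looks like the sketch below; the `device_type` value and model name are placeholders, not details from this issue, and the session call is commented out so the snippet stands alone.

```python
# Hypothetical OpenVINO provider configuration (values are assumptions).
# "device_type" picks the target device; other common values include
# "GPU" and "NPU" depending on the hardware available.
ov_options = {
    "device_type": "CPU",
}

providers = [
    ("OpenVINOExecutionProvider", ov_options),
    "CPUExecutionProvider",  # fallback to the default CPU provider
]

# Session creation (requires the onnxruntime-openvino package):
# import onnxruntime as ort
# session = ort.InferenceSession("model.onnx", providers=providers)
```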
-
I am having a lot of trouble with speaker diarization across many different platforms and models.
```toml
[target.'cfg(any(windows, target_os = "linux"))'.dependencies]
sherpa-rs = { version =…
-
### What happened?
for the given IR
```mlir
module {
func.func @torch_jit(%arg2: !torch.vtensor) -> !torch.vtensor attributes {torch.onnx_meta.ir_version = 7 : si64, torch.onnx_meta.opset_versi…
-
### System Info
```shell
Optimum version: 1.22.0
Platform: Linux (Ubuntu 22.04.4 LTS)
Python version: 3.12.2
ONNX Runtime Version: 1.19.2
CUDA Version: 12.1
CUDA Execution Provider: Yes (CUDA…
-
### Describe the issue
How do I include libnvinfer_plugin in `trt_extra_plugin_lib_paths` on Windows? I'm using Python, and I want to use this to load the EfficientNMS plugin.
### To reproduce
ort_session…
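The general shape of passing plugin libraries to the TensorRT execution provider is sketched below. The Windows path is a made-up example, not from this issue; on Windows the plugin library is typically `nvinfer_plugin.dll` rather than the `libnvinfer_plugin.so` naming used on Linux.

```python
# Hypothetical TensorRT provider configuration (the path is an assumption;
# adjust it to the local TensorRT installation).
trt_options = {
    "trt_extra_plugin_lib_paths": r"C:\TensorRT\lib\nvinfer_plugin.dll",
}

providers = [
    ("TensorrtExecutionProvider", trt_options),
    "CUDAExecutionProvider",  # fallback for nodes TensorRT cannot take
    "CPUExecutionProvider",
]

# Session creation (requires onnxruntime-gpu built with TensorRT support):
# import onnxruntime as ort
# ort_session = ort.InferenceSession("model.onnx", providers=providers)
```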
-
### 🐛 Describe the bug
Found in this example
https://github.com/pytorch/vision/blob/e65a857b5487a8493bc8a80a95d64d9f049de347/torchvision/models/detection/faster_rcnn.py#L373
Following this exam…
-
### Describe the issue
I am trying to run inference on an ONNX model on a Jetson AGX Orin with JetPack 6.1. CUDA is 12.6 and cuDNN is 9.3. The website says onnxruntime 1.19.0 supports cuDNN 9.x, but w…
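A quick first diagnostic in situations like this is to confirm that the installed build actually exposes the CUDA provider at all. The sketch below assumes onnxruntime-gpu is installed on the device; the runtime calls are commented out so the snippet is self-contained.

```python
# On the Jetson itself, check the build and its compiled-in providers:
# import onnxruntime as ort
# print(ort.__version__)
# print(ort.get_available_providers())
# If the CUDA/cuDNN stack matches the build, the list should include
# "CUDAExecutionProvider"; otherwise the session silently falls back to CPU.

# Request CUDA explicitly, with a CPU fallback:
providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]
# session = ort.InferenceSession("model.onnx", providers=providers)
```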
-
### Describe the issue
We currently use a large `max_length` in beam search, but we got max_length
-
Hi, I was converting GFPGANv1.3.pth to ONNX format, but I got an error when I tried to run inference.
onnx: 1.17.0
onnxruntime: 1.19.2
torch: 2.4.1+cu121
onnxsim: 0.4.36
```
import torch
import on…