-
### 🐛 Describe the bug
Hello!
I tried to export a network based on a graph neural network as an ONNX model, but it always fails. Then I found that even exporting a single-layer network `torch_geomet…
-
After converting my PyTorch model to ONNX format, I noticed an issue with CUDA memory management. When processing a large input, the CUDA memory usage spikes as expected. However, for subsequent small…
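If the growth comes from the ONNX Runtime CUDA memory arena holding on to its peak allocation, one possible mitigation (a hedged sketch; the option names come from the ONNX Runtime CUDA EP documentation, and `model.onnx` is a hypothetical path) is to bound how the arena grows:

```python
# Sketch: CUDA EP provider options that make the arena allocate only what each
# request needs instead of doubling on growth (assumes onnxruntime-gpu).
cuda_options = {
    "arena_extend_strategy": "kSameAsRequested",  # grow by the requested size only
    "gpu_mem_limit": 2 * 1024 ** 3,               # cap the arena at 2 GiB (example value)
}
providers = [("CUDAExecutionProvider", cuda_options), "CPUExecutionProvider"]

# Session creation needs a GPU build, so it is shown commented out:
# import onnxruntime as ort
# sess = ort.InferenceSession("model.onnx", providers=providers)
```

Whether this helps depends on the actual allocation pattern; it trades some allocation speed for a tighter memory footprint.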
-
## Background
My question is about executing encoder-decoder models with the ONNX Runtime GenAI library. My goal is to convert the DONUT transformer (https://arxiv.org/abs/2111.15664), a sequence-to-sequence tra…
-
### Describe the issue
A simple model with GEMM(DQ(Q(input0)), DQ(Q(input1))), quantizing FP32 -> FP8E4M3, fails to run using the CPU EP. It is runnable using the CUDA EP.
An identical model using…
-
### Describe the issue
With the new version 1.18, it seems that when trying to use different InferenceSessions on the same DirectML device, all threads remain stalled without raising any exception or er…
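Until the hang itself is diagnosed, one hedged workaround sketch is to serialize all `run()` calls that share the DirectML device behind a single process-wide lock (pure Python; `session_run` below is a stand-in for a real `InferenceSession.run`):

```python
import threading

# Hedged workaround sketch: if concurrent run() calls on the same DirectML
# device stall, allow only one in-flight inference at a time.
_dml_lock = threading.Lock()

def run_serialized(session_run, *args, **kwargs):
    """Run one inference at a time on the shared DirectML device."""
    with _dml_lock:
        return session_run(*args, **kwargs)

# Usage with a stand-in callable in place of session.run:
result = run_serialized(lambda x: x * 2, 21)  # → 42
```

This costs parallelism, so it is only a stopgap to confirm whether concurrent access is the trigger.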
-
### Describe the issue
Many users report that `GPU.0`, `GPU.1`, etc. do not work for them when creating an InferenceSession. According to the [documentation](https://docs.openvino.ai/2024/openv…
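For reference, the OpenVINO EP selects a device through the `device_type` provider option (a hedged sketch; the option name is from the ONNX Runtime OpenVINO EP documentation, and `model.onnx` is a hypothetical path):

```python
# Sketch (assumes the onnxruntime-openvino build): the OpenVINO EP picks a
# device via the "device_type" provider option, e.g. "GPU.0" or "GPU.1".
openvino_options = {"device_type": "GPU.0"}
providers = [("OpenVINOExecutionProvider", openvino_options)]

# Session creation requires the OpenVINO build, so it is shown commented out:
# import onnxruntime as ort
# sess = ort.InferenceSession("model.onnx", providers=providers)
```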
-
Please help with pose estimation for rk3588
### Pose
https://github.com/WongKinYiu/yolov7/tree/pose
```
W __init__: rknn-toolkit2 version: 1.4.0-22dcfef4
--> Loading model
W load_onnx: It …
```
-
[ONNX](https://onnx.ai/) is an open standard for machine learning interoperability, allowing developers to use different backends to execute their models.
The backend implementation is relati…
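The core idea that a backend just executes a model's operator graph can be sketched in a few lines of plain Python (a conceptual toy with illustrative names, not the real `onnx.backend` API):

```python
# Toy sketch of the "backend" idea: a model is a graph of ops, and any backend
# that can execute those ops can run the model.
OPS = {
    "Add": lambda a, b: a + b,
    "Mul": lambda a, b: a * b,
}

def run_graph(nodes, inputs):
    """Execute a list of (op, input_names, output_name) nodes over a value dict."""
    values = dict(inputs)
    for op, in_names, out_name in nodes:
        values[out_name] = OPS[op](*(values[n] for n in in_names))
    return values

# y = (x + 1) * 2
graph = [("Add", ("x", "one"), "t"), ("Mul", ("t", "two"), "y")]
out = run_graph(graph, {"x": 3, "one": 1, "two": 2})  # out["y"] → 8
```

A real backend does the same walk over the ONNX graph, dispatching each node to its own kernel implementations.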
-
@ricky0123 You're doing amazing work. The library is great, but it sometimes does not work for use cases like Chrome extensions.
So I created a direct Next.js example that doesn't require any depe…
-
### Library name
ONNX Runtime
### New version number
v1.17.0
### Other information that may be useful (release notes, etc...)
Currently, the `vcpkg` port only supports the GPU version of ONNX Runtime…